00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2388 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3649 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.060 The recommended git tool is: git 00:00:00.060 using credential 00000000-0000-0000-0000-000000000002 00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.097 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.162 Using shallow fetch with depth 1 00:00:00.162 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.162 > git --version # timeout=10 00:00:00.247 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.314 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.314 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.322 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.335 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.349 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.349 > git config core.sparsecheckout # timeout=10 00:00:04.363 > git read-tree -mu HEAD # timeout=10 00:00:04.380 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.405 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.405 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.520 [Pipeline] Start of Pipeline 00:00:04.546 [Pipeline] library 00:00:04.555 Loading library shm_lib@master 00:00:04.556 Library shm_lib@master is cached. Copying from home. 00:00:04.582 [Pipeline] node 00:00:19.588 Still waiting to schedule task 00:00:19.588 Waiting for next available executor on ‘vagrant-vm-host’ 00:05:11.133 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:11.135 [Pipeline] { 00:05:11.148 [Pipeline] catchError 00:05:11.149 [Pipeline] { 00:05:11.169 [Pipeline] wrap 00:05:11.182 [Pipeline] { 00:05:11.193 [Pipeline] stage 00:05:11.195 [Pipeline] { (Prologue) 00:05:11.215 [Pipeline] echo 00:05:11.217 Node: VM-host-WFP7 00:05:11.224 [Pipeline] cleanWs 00:05:11.234 [WS-CLEANUP] Deleting project workspace... 00:05:11.234 [WS-CLEANUP] Deferred wipeout is used... 
00:05:11.241 [WS-CLEANUP] done 00:05:11.437 [Pipeline] setCustomBuildProperty 00:05:11.537 [Pipeline] httpRequest 00:05:11.851 [Pipeline] echo 00:05:11.853 Sorcerer 10.211.164.20 is alive 00:05:11.864 [Pipeline] retry 00:05:11.867 [Pipeline] { 00:05:11.884 [Pipeline] httpRequest 00:05:11.889 HttpMethod: GET 00:05:11.889 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:05:11.890 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:05:11.891 Response Code: HTTP/1.1 200 OK 00:05:11.891 Success: Status code 200 is in the accepted range: 200,404 00:05:11.892 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:05:12.038 [Pipeline] } 00:05:12.055 [Pipeline] // retry 00:05:12.064 [Pipeline] sh 00:05:12.350 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:05:12.416 [Pipeline] httpRequest 00:05:12.723 [Pipeline] echo 00:05:12.725 Sorcerer 10.211.164.20 is alive 00:05:12.735 [Pipeline] retry 00:05:12.737 [Pipeline] { 00:05:12.753 [Pipeline] httpRequest 00:05:12.758 HttpMethod: GET 00:05:12.759 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:05:12.759 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:05:12.760 Response Code: HTTP/1.1 200 OK 00:05:12.761 Success: Status code 200 is in the accepted range: 200,404 00:05:12.761 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:05:14.989 [Pipeline] } 00:05:15.008 [Pipeline] // retry 00:05:15.017 [Pipeline] sh 00:05:15.300 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:05:18.597 [Pipeline] sh 00:05:18.878 + git -C spdk log --oneline -n5 00:05:18.878 c13c99a5e test: Various fixes for Fedora40 00:05:18.878 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:05:18.878 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:05:18.878 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:05:18.878 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:05:18.897 [Pipeline] writeFile 00:05:18.914 [Pipeline] sh 00:05:19.195 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:19.208 [Pipeline] sh 00:05:19.497 + cat autorun-spdk.conf 00:05:19.497 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:19.497 SPDK_TEST_NVMF=1 00:05:19.497 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:19.497 SPDK_TEST_VFIOUSER=1 00:05:19.497 SPDK_TEST_USDT=1 00:05:19.497 SPDK_RUN_UBSAN=1 00:05:19.497 SPDK_TEST_NVMF_MDNS=1 00:05:19.497 NET_TYPE=virt 00:05:19.497 SPDK_JSONRPC_GO_CLIENT=1 00:05:19.497 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:19.513 RUN_NIGHTLY=1 00:05:19.515 [Pipeline] } 00:05:19.530 [Pipeline] // stage 00:05:19.546 [Pipeline] stage 00:05:19.548 [Pipeline] { (Run VM) 00:05:19.561 [Pipeline] sh 00:05:19.844 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:19.844 + echo 'Start stage prepare_nvme.sh' 00:05:19.844 Start stage prepare_nvme.sh 00:05:19.844 + [[ -n 6 ]] 00:05:19.844 + disk_prefix=ex6 00:05:19.844 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:05:19.844 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:05:19.844 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:05:19.844 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:19.844 ++ SPDK_TEST_NVMF=1 00:05:19.844 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:19.844 ++ SPDK_TEST_VFIOUSER=1 00:05:19.844 ++ SPDK_TEST_USDT=1 00:05:19.844 ++ SPDK_RUN_UBSAN=1 00:05:19.844 ++ SPDK_TEST_NVMF_MDNS=1 00:05:19.844 ++ NET_TYPE=virt 00:05:19.844 ++ SPDK_JSONRPC_GO_CLIENT=1 00:05:19.844 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:19.844 ++ RUN_NIGHTLY=1 00:05:19.844 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:19.844 + nvme_files=() 00:05:19.844 + declare -A nvme_files 00:05:19.844 + backend_dir=/var/lib/libvirt/images/backends 00:05:19.844 + nvme_files['nvme.img']=5G 00:05:19.844 + nvme_files['nvme-cmb.img']=5G 00:05:19.845 + nvme_files['nvme-multi0.img']=4G 00:05:19.845 + nvme_files['nvme-multi1.img']=4G 00:05:19.845 + nvme_files['nvme-multi2.img']=4G 00:05:19.845 + nvme_files['nvme-openstack.img']=8G 00:05:19.845 + nvme_files['nvme-zns.img']=5G 00:05:19.845 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:19.845 + (( SPDK_TEST_FTL == 1 )) 00:05:19.845 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:19.845 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:19.845 + for nvme in "${!nvme_files[@]}" 00:05:19.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:05:19.845 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:19.845 + for nvme in "${!nvme_files[@]}" 00:05:19.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:05:19.845 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:19.845 + for nvme in "${!nvme_files[@]}" 00:05:19.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:05:19.845 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:19.845 + for nvme in "${!nvme_files[@]}" 00:05:19.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:05:19.845 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:19.845 + for nvme in "${!nvme_files[@]}" 00:05:19.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:05:19.845 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:19.845 + for nvme in "${!nvme_files[@]}" 00:05:19.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:05:19.845 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:19.845 + for nvme in "${!nvme_files[@]}" 00:05:19.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:05:20.414 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:20.414 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:05:20.414 + echo 'End stage prepare_nvme.sh' 00:05:20.414 End stage prepare_nvme.sh 00:05:20.427 [Pipeline] sh 00:05:20.710 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:20.710 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b 
/var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:05:20.710 00:05:20.710 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:05:20.710 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:05:20.710 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:20.710 HELP=0 00:05:20.710 DRY_RUN=0 00:05:20.710 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:05:20.710 NVME_DISKS_TYPE=nvme,nvme, 00:05:20.710 NVME_AUTO_CREATE=0 00:05:20.710 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:05:20.710 NVME_CMB=,, 00:05:20.710 NVME_PMR=,, 00:05:20.710 NVME_ZNS=,, 00:05:20.710 NVME_MS=,, 00:05:20.710 NVME_FDP=,, 00:05:20.710 SPDK_VAGRANT_DISTRO=fedora39 00:05:20.710 SPDK_VAGRANT_VMCPU=10 00:05:20.710 SPDK_VAGRANT_VMRAM=12288 00:05:20.710 SPDK_VAGRANT_PROVIDER=libvirt 00:05:20.711 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:20.711 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:20.711 SPDK_OPENSTACK_NETWORK=0 00:05:20.711 VAGRANT_PACKAGE_BOX=0 00:05:20.711 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:20.711 FORCE_DISTRO=true 00:05:20.711 VAGRANT_BOX_VERSION= 00:05:20.711 EXTRA_VAGRANTFILES= 00:05:20.711 NIC_MODEL=virtio 00:05:20.711 00:05:20.711 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:05:20.711 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:23.998 Bringing machine 'default' up with 'libvirt' provider... 00:05:23.998 ==> default: Creating image (snapshot of base box volume). 00:05:24.258 ==> default: Creating domain with the following settings... 
00:05:24.258 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732102376_dd5d2c7811fcb11d304f 00:05:24.258 ==> default: -- Domain type: kvm 00:05:24.258 ==> default: -- Cpus: 10 00:05:24.258 ==> default: -- Feature: acpi 00:05:24.258 ==> default: -- Feature: apic 00:05:24.258 ==> default: -- Feature: pae 00:05:24.258 ==> default: -- Memory: 12288M 00:05:24.258 ==> default: -- Memory Backing: hugepages: 00:05:24.259 ==> default: -- Management MAC: 00:05:24.259 ==> default: -- Loader: 00:05:24.259 ==> default: -- Nvram: 00:05:24.259 ==> default: -- Base box: spdk/fedora39 00:05:24.259 ==> default: -- Storage pool: default 00:05:24.259 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732102376_dd5d2c7811fcb11d304f.img (20G) 00:05:24.259 ==> default: -- Volume Cache: default 00:05:24.259 ==> default: -- Kernel: 00:05:24.259 ==> default: -- Initrd: 00:05:24.259 ==> default: -- Graphics Type: vnc 00:05:24.259 ==> default: -- Graphics Port: -1 00:05:24.259 ==> default: -- Graphics IP: 127.0.0.1 00:05:24.259 ==> default: -- Graphics Password: Not defined 00:05:24.259 ==> default: -- Video Type: cirrus 00:05:24.259 ==> default: -- Video VRAM: 9216 00:05:24.259 ==> default: -- Sound Type: 00:05:24.259 ==> default: -- Keymap: en-us 00:05:24.259 ==> default: -- TPM Path: 00:05:24.259 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:24.259 ==> default: -- Command line args: 00:05:24.259 ==> default: -> value=-device, 00:05:24.259 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:05:24.259 ==> default: -> value=-drive, 00:05:24.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:05:24.259 ==> default: -> value=-device, 00:05:24.259 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:24.259 ==> default: -> value=-device, 00:05:24.259 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:05:24.259 ==> default: -> value=-drive, 00:05:24.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:24.259 ==> default: -> value=-device, 00:05:24.259 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:24.259 ==> default: -> value=-drive, 00:05:24.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:24.259 ==> default: -> value=-device, 00:05:24.259 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:24.259 ==> default: -> value=-drive, 00:05:24.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:24.259 ==> default: -> value=-device, 00:05:24.259 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:24.518 ==> default: Creating shared folders metadata... 00:05:24.518 ==> default: Starting domain. 00:05:25.901 ==> default: Waiting for domain to get an IP address... 00:05:44.064 ==> default: Waiting for SSH to become available... 00:05:44.064 ==> default: Configuring and enabling network interfaces... 
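The NVMe arguments listed above describe two emulated controllers: nvme-0 (serial 12340) backed by ex6-nvme.img with a single namespace, and nvme-1 (serial 12341) with three namespaces backed by ex6-nvme-multi0/1/2.img. As a rough sketch, they compose into a QEMU invocation along these lines; the emulator path is the SPDK_QEMU_EMULATOR value from the Setup block above, the provisioner's trailing commas are dropped, and the machine, CPU, memory, boot-disk and network options that vagrant-libvirt adds are omitted:

/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  ... machine, CPU, memory and boot-disk options elided ... \
  -device nvme,id=nvme-0,serial=12340 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

This matches the setup.sh status output later in the log: one NVMe controller exposing a single block device and one exposing three.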
00:05:48.260 default: SSH address: 192.168.121.237:22 00:05:48.260 default: SSH username: vagrant 00:05:48.260 default: SSH auth method: private key 00:05:50.800 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:58.934 ==> default: Mounting SSHFS shared folder... 00:06:00.840 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:00.840 ==> default: Checking Mount.. 00:06:02.222 ==> default: Folder Successfully Mounted! 00:06:02.222 ==> default: Running provisioner: file... 00:06:03.598 default: ~/.gitconfig => .gitconfig 00:06:03.860 00:06:03.860 SUCCESS! 00:06:03.860 00:06:03.860 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:06:03.860 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:03.860 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:06:03.860 00:06:03.869 [Pipeline] } 00:06:03.882 [Pipeline] // stage 00:06:03.892 [Pipeline] dir 00:06:03.892 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:06:03.894 [Pipeline] { 00:06:03.904 [Pipeline] catchError 00:06:03.906 [Pipeline] { 00:06:03.916 [Pipeline] sh 00:06:04.194 + + sedvagrant -ne ssh-config /^Host/,$p --host 00:06:04.194 vagrant 00:06:04.194 + tee ssh_conf 00:06:07.507 Host vagrant 00:06:07.507 HostName 192.168.121.237 00:06:07.507 User vagrant 00:06:07.507 Port 22 00:06:07.507 UserKnownHostsFile /dev/null 00:06:07.507 StrictHostKeyChecking no 00:06:07.507 PasswordAuthentication no 00:06:07.507 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:07.507 IdentitiesOnly yes 00:06:07.507 LogLevel FATAL 00:06:07.507 ForwardAgent yes 00:06:07.507 ForwardX11 yes 00:06:07.507 00:06:07.530 [Pipeline] withEnv 00:06:07.533 [Pipeline] { 00:06:07.546 [Pipeline] sh 00:06:07.847 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:07.847 source /etc/os-release 00:06:07.847 [[ -e /image.version ]] && img=$(< /image.version) 00:06:07.847 # Minimal, systemd-like check. 00:06:07.847 if [[ -e /.dockerenv ]]; then 00:06:07.847 # Clear garbage from the node's name: 00:06:07.847 # agt-er_autotest_547-896 -> autotest_547-896 00:06:07.847 # $HOSTNAME is the actual container id 00:06:07.847 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:07.847 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:07.847 # We can assume this is a mount from a host where container is running, 00:06:07.847 # so fetch its hostname to easily identify the target swarm worker. 
00:06:07.847 container="$(< /etc/hostname) ($agent)" 00:06:07.847 else 00:06:07.847 # Fallback 00:06:07.847 container=$agent 00:06:07.847 fi 00:06:07.847 fi 00:06:07.847 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:07.847 00:06:08.118 [Pipeline] } 00:06:08.134 [Pipeline] // withEnv 00:06:08.145 [Pipeline] setCustomBuildProperty 00:06:08.164 [Pipeline] stage 00:06:08.166 [Pipeline] { (Tests) 00:06:08.184 [Pipeline] sh 00:06:08.464 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:08.736 [Pipeline] sh 00:06:09.017 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:09.293 [Pipeline] timeout 00:06:09.293 Timeout set to expire in 1 hr 0 min 00:06:09.295 [Pipeline] { 00:06:09.310 [Pipeline] sh 00:06:09.594 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:10.162 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:06:10.176 [Pipeline] sh 00:06:10.458 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:10.731 [Pipeline] sh 00:06:11.011 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:11.286 [Pipeline] sh 00:06:11.566 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:06:11.825 ++ readlink -f spdk_repo 00:06:11.825 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:11.825 + [[ -n /home/vagrant/spdk_repo ]] 00:06:11.825 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:11.825 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:11.825 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:11.825 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:11.825 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:11.825 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:06:11.825 + cd /home/vagrant/spdk_repo 00:06:11.825 + source /etc/os-release 00:06:11.825 ++ NAME='Fedora Linux' 00:06:11.825 ++ VERSION='39 (Cloud Edition)' 00:06:11.825 ++ ID=fedora 00:06:11.825 ++ VERSION_ID=39 00:06:11.825 ++ VERSION_CODENAME= 00:06:11.825 ++ PLATFORM_ID=platform:f39 00:06:11.825 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:11.825 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:11.825 ++ LOGO=fedora-logo-icon 00:06:11.826 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:11.826 ++ HOME_URL=https://fedoraproject.org/ 00:06:11.826 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:11.826 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:11.826 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:11.826 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:11.826 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:11.826 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:11.826 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:11.826 ++ SUPPORT_END=2024-11-12 00:06:11.826 ++ VARIANT='Cloud Edition' 00:06:11.826 ++ VARIANT_ID=cloud 00:06:11.826 + uname -a 00:06:11.826 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:11.826 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:12.085 Hugepages 00:06:12.085 node hugesize free / total 00:06:12.085 node0 1048576kB 0 / 0 00:06:12.085 node0 2048kB 0 / 0 00:06:12.085 00:06:12.085 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:12.085 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:12.085 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:12.085 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:06:12.085 + rm -f /tmp/spdk-ld-path 00:06:12.085 + source autorun-spdk.conf 00:06:12.085 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:12.085 ++ SPDK_TEST_NVMF=1 00:06:12.085 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:12.085 ++ SPDK_TEST_VFIOUSER=1 00:06:12.085 ++ SPDK_TEST_USDT=1 00:06:12.085 ++ SPDK_RUN_UBSAN=1 00:06:12.085 ++ SPDK_TEST_NVMF_MDNS=1 00:06:12.085 ++ NET_TYPE=virt 00:06:12.085 ++ SPDK_JSONRPC_GO_CLIENT=1 00:06:12.085 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:12.085 ++ RUN_NIGHTLY=1 00:06:12.085 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:12.085 + [[ -n '' ]] 00:06:12.085 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:12.085 + for M in /var/spdk/build-*-manifest.txt 00:06:12.085 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:12.085 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:12.085 + for M in /var/spdk/build-*-manifest.txt 00:06:12.085 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:12.085 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:12.085 + for M in /var/spdk/build-*-manifest.txt 00:06:12.085 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:12.085 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:12.085 ++ uname 00:06:12.085 + [[ Linux == \L\i\n\u\x ]] 00:06:12.086 + sudo dmesg -T 00:06:12.086 + sudo dmesg --clear 00:06:12.345 + dmesg_pid=5390 00:06:12.345 + [[ Fedora Linux == FreeBSD ]] 00:06:12.345 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:12.345 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:12.345 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 
]] 00:06:12.345 + sudo dmesg -Tw 00:06:12.345 + [[ -x /usr/src/fio-static/fio ]] 00:06:12.345 + export FIO_BIN=/usr/src/fio-static/fio 00:06:12.345 + FIO_BIN=/usr/src/fio-static/fio 00:06:12.345 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:12.345 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:12.345 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:12.345 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:12.345 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:12.345 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:12.345 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:12.345 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:12.345 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:12.345 Test configuration: 00:06:12.345 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:12.345 SPDK_TEST_NVMF=1 00:06:12.345 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:12.345 SPDK_TEST_VFIOUSER=1 00:06:12.345 SPDK_TEST_USDT=1 00:06:12.345 SPDK_RUN_UBSAN=1 00:06:12.345 SPDK_TEST_NVMF_MDNS=1 00:06:12.345 NET_TYPE=virt 00:06:12.345 SPDK_JSONRPC_GO_CLIENT=1 00:06:12.345 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:12.345 RUN_NIGHTLY=1 11:33:45 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:06:12.345 11:33:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.345 11:33:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:12.345 11:33:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.345 11:33:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.345 11:33:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.345 11:33:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.345 11:33:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.345 11:33:45 -- paths/export.sh@5 -- $ export PATH 00:06:12.345 11:33:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.345 11:33:45 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:12.345 11:33:45 -- 
common/autobuild_common.sh@440 -- $ date +%s 00:06:12.345 11:33:45 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732102425.XXXXXX 00:06:12.345 11:33:45 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732102425.GBQSK9 00:06:12.345 11:33:45 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:06:12.345 11:33:45 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:06:12.345 11:33:45 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:12.345 11:33:45 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:12.345 11:33:45 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:12.345 11:33:45 -- common/autobuild_common.sh@456 -- $ get_config_params 00:06:12.345 11:33:45 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:06:12.345 11:33:45 -- common/autotest_common.sh@10 -- $ set +x 00:06:12.345 11:33:45 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:06:12.345 11:33:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:12.345 11:33:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:12.345 11:33:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:12.345 11:33:45 -- spdk/autobuild.sh@16 -- $ date -u 00:06:12.345 Wed Nov 20 11:33:45 AM UTC 2024 00:06:12.346 11:33:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:12.346 LTS-67-gc13c99a5e 00:06:12.346 11:33:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:12.346 11:33:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:12.346 11:33:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:12.346 11:33:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:06:12.346 11:33:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:06:12.346 11:33:45 -- common/autotest_common.sh@10 -- $ set +x 00:06:12.346 ************************************ 00:06:12.346 START TEST ubsan 00:06:12.346 ************************************ 00:06:12.346 using ubsan 00:06:12.346 11:33:45 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:06:12.346 00:06:12.346 real 0m0.001s 00:06:12.346 user 0m0.000s 00:06:12.346 sys 0m0.000s 00:06:12.346 11:33:45 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:06:12.346 11:33:45 -- common/autotest_common.sh@10 -- $ set +x 00:06:12.346 ************************************ 00:06:12.346 END TEST ubsan 00:06:12.346 ************************************ 00:06:12.346 11:33:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:12.346 11:33:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:12.346 11:33:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:12.346 11:33:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:12.346 11:33:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:12.346 11:33:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:12.346 11:33:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:12.346 11:33:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:12.346 11:33:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:06:12.605 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:12.605 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:13.172 Using 'verbs' RDMA provider 00:06:28.663 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:06:43.548 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:06:43.548 go version go1.21.1 linux/amd64 00:06:43.548 Creating mk/config.mk...done. 00:06:43.548 Creating mk/cc.flags.mk...done. 00:06:43.548 Type 'make' to build. 00:06:43.548 11:34:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:06:43.548 11:34:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:06:43.548 11:34:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:06:43.548 11:34:16 -- common/autotest_common.sh@10 -- $ set +x 00:06:43.549 ************************************ 00:06:43.549 START TEST make 00:06:43.549 ************************************ 00:06:43.549 11:34:16 -- common/autotest_common.sh@1114 -- $ make -j10 00:06:44.115 make[1]: Nothing to be done for 'all'. 00:06:45.048 The Meson build system 00:06:45.048 Version: 1.5.0 00:06:45.048 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:06:45.048 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:06:45.048 Build type: native build 00:06:45.048 Project name: libvfio-user 00:06:45.048 Project version: 0.0.1 00:06:45.048 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:45.048 C linker for the host machine: cc ld.bfd 2.40-14 00:06:45.048 Host machine cpu family: x86_64 00:06:45.048 Host machine cpu: x86_64 00:06:45.048 Run-time dependency threads found: YES 00:06:45.048 Library dl found: YES 00:06:45.048 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:45.048 Run-time dependency json-c found: YES 0.17 00:06:45.048 Run-time dependency cmocka found: YES 1.1.7 00:06:45.048 Program pytest-3 found: NO 00:06:45.048 Program flake8 found: NO 00:06:45.048 Program misspell-fixer found: NO 00:06:45.048 Program restructuredtext-lint found: NO 00:06:45.048 Program valgrind found: YES (/usr/bin/valgrind) 00:06:45.048 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:45.048 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:45.048 Compiler for C supports arguments -Wwrite-strings: YES 00:06:45.048 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:06:45.048 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:06:45.048 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:06:45.048 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
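For reference, the wrapped autobuild.sh@67 entry above amounts to the following single configure invocation (flags copied from the log as executed), followed by the build command started by run_test make:

/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi \
    --with-golang --with-shared
make -j10

Paths assume the CI layout under /home/vagrant/spdk_repo; outside this VM the same flags would be passed to ./configure from an SPDK checkout.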
00:06:45.048 Build targets in project: 8 00:06:45.048 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:45.048 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:45.048 00:06:45.048 libvfio-user 0.0.1 00:06:45.048 00:06:45.048 User defined options 00:06:45.048 buildtype : debug 00:06:45.048 default_library: shared 00:06:45.048 libdir : /usr/local/lib 00:06:45.048 00:06:45.048 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:45.615 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:06:45.874 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:45.874 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:45.874 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:45.874 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:45.874 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:45.874 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:45.874 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:46.133 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:46.133 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:46.133 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:46.133 [11/37] Compiling C object samples/server.p/server.c.o 00:06:46.133 [12/37] Compiling C object samples/client.p/client.c.o 00:06:46.133 [13/37] Compiling C object samples/null.p/null.c.o 00:06:46.133 [14/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:46.133 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:46.133 [16/37] Linking target samples/client 00:06:46.133 [17/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:46.133 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:46.133 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:46.133 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:46.133 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:46.407 [22/37] Linking target lib/libvfio-user.so.0.0.1 00:06:46.407 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:46.407 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:46.407 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:46.407 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:46.407 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:46.407 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:46.407 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:46.407 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:46.407 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:46.407 [32/37] Linking target test/unit_tests 00:06:46.407 [33/37] Linking target samples/server 00:06:46.407 [34/37] Linking target samples/shadow_ioeventfd_server 00:06:46.407 [35/37] Linking target samples/gpio-pci-idio-16 00:06:46.407 [36/37] Linking target samples/null 00:06:46.407 [37/37] Linking target samples/lspci 00:06:46.407 INFO: autodetecting backend as ninja 00:06:46.407 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:06:46.407 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:06:46.975 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:06:46.975 ninja: no work to do. 00:06:56.969 The Meson build system 00:06:56.969 Version: 1.5.0 00:06:56.969 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:56.969 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:56.969 Build type: native build 00:06:56.969 Program cat found: YES (/usr/bin/cat) 00:06:56.969 Project name: DPDK 00:06:56.969 Project version: 23.11.0 00:06:56.969 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:56.969 C linker for the host machine: cc ld.bfd 2.40-14 00:06:56.969 Host machine cpu family: x86_64 00:06:56.969 Host machine cpu: x86_64 00:06:56.969 Message: ## Building in Developer Mode ## 00:06:56.969 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:56.969 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:56.969 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:56.969 Program python3 found: YES (/usr/bin/python3) 00:06:56.969 Program cat found: YES (/usr/bin/cat) 00:06:56.969 Compiler for C supports arguments -march=native: YES 00:06:56.969 Checking for size of "void *" : 8 00:06:56.969 Checking for size of "void *" : 8 (cached) 00:06:56.969 Library m found: YES 00:06:56.969 Library numa found: YES 00:06:56.969 Has header "numaif.h" : YES 00:06:56.969 Library fdt found: NO 00:06:56.969 Library execinfo found: NO 00:06:56.969 Has header "execinfo.h" : YES 00:06:56.969 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:56.969 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:56.969 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:56.969 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:56.969 Run-time dependency openssl found: YES 3.1.1 00:06:56.969 Run-time dependency libpcap found: YES 1.10.4 00:06:56.969 Has header "pcap.h" with dependency libpcap: YES 00:06:56.969 Compiler for C supports arguments -Wcast-qual: YES 00:06:56.969 Compiler for C supports arguments -Wdeprecated: YES 00:06:56.969 Compiler for C supports arguments -Wformat: YES 00:06:56.969 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:56.969 Compiler for C supports arguments -Wformat-security: NO 00:06:56.969 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:56.969 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:56.969 Compiler for C supports arguments -Wnested-externs: YES 00:06:56.969 Compiler for C supports arguments -Wold-style-definition: YES 00:06:56.969 Compiler for C supports arguments -Wpointer-arith: YES 00:06:56.969 Compiler for C supports arguments -Wsign-compare: YES 00:06:56.969 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:56.969 Compiler for C supports arguments -Wundef: YES 00:06:56.969 Compiler for C supports arguments -Wwrite-strings: YES 00:06:56.969 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:56.969 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:56.969 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:56.969 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:56.969 Program objdump found: YES (/usr/bin/objdump) 00:06:56.969 
Compiler for C supports arguments -mavx512f: YES 00:06:56.969 Checking if "AVX512 checking" compiles: YES 00:06:56.969 Fetching value of define "__SSE4_2__" : 1 00:06:56.969 Fetching value of define "__AES__" : 1 00:06:56.969 Fetching value of define "__AVX__" : 1 00:06:56.969 Fetching value of define "__AVX2__" : 1 00:06:56.969 Fetching value of define "__AVX512BW__" : 1 00:06:56.969 Fetching value of define "__AVX512CD__" : 1 00:06:56.969 Fetching value of define "__AVX512DQ__" : 1 00:06:56.969 Fetching value of define "__AVX512F__" : 1 00:06:56.969 Fetching value of define "__AVX512VL__" : 1 00:06:56.969 Fetching value of define "__PCLMUL__" : 1 00:06:56.969 Fetching value of define "__RDRND__" : 1 00:06:56.969 Fetching value of define "__RDSEED__" : 1 00:06:56.969 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:56.969 Fetching value of define "__znver1__" : (undefined) 00:06:56.969 Fetching value of define "__znver2__" : (undefined) 00:06:56.970 Fetching value of define "__znver3__" : (undefined) 00:06:56.970 Fetching value of define "__znver4__" : (undefined) 00:06:56.970 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:56.970 Message: lib/log: Defining dependency "log" 00:06:56.970 Message: lib/kvargs: Defining dependency "kvargs" 00:06:56.970 Message: lib/telemetry: Defining dependency "telemetry" 00:06:56.970 Checking for function "getentropy" : NO 00:06:56.970 Message: lib/eal: Defining dependency "eal" 00:06:56.970 Message: lib/ring: Defining dependency "ring" 00:06:56.970 Message: lib/rcu: Defining dependency "rcu" 00:06:56.970 Message: lib/mempool: Defining dependency "mempool" 00:06:56.970 Message: lib/mbuf: Defining dependency "mbuf" 00:06:56.970 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:56.970 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:56.970 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:56.970 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:56.970 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:56.970 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:06:56.970 Compiler for C supports arguments -mpclmul: YES 00:06:56.970 Compiler for C supports arguments -maes: YES 00:06:56.970 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:56.970 Compiler for C supports arguments -mavx512bw: YES 00:06:56.970 Compiler for C supports arguments -mavx512dq: YES 00:06:56.970 Compiler for C supports arguments -mavx512vl: YES 00:06:56.970 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:56.970 Compiler for C supports arguments -mavx2: YES 00:06:56.970 Compiler for C supports arguments -mavx: YES 00:06:56.970 Message: lib/net: Defining dependency "net" 00:06:56.970 Message: lib/meter: Defining dependency "meter" 00:06:56.970 Message: lib/ethdev: Defining dependency "ethdev" 00:06:56.970 Message: lib/pci: Defining dependency "pci" 00:06:56.970 Message: lib/cmdline: Defining dependency "cmdline" 00:06:56.970 Message: lib/hash: Defining dependency "hash" 00:06:56.970 Message: lib/timer: Defining dependency "timer" 00:06:56.970 Message: lib/compressdev: Defining dependency "compressdev" 00:06:56.970 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:56.970 Message: lib/dmadev: Defining dependency "dmadev" 00:06:56.970 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:56.970 Message: lib/power: Defining dependency "power" 00:06:56.970 Message: lib/reorder: Defining dependency "reorder" 00:06:56.970 Message: lib/security: Defining dependency 
"security" 00:06:56.970 Has header "linux/userfaultfd.h" : YES 00:06:56.970 Has header "linux/vduse.h" : YES 00:06:56.970 Message: lib/vhost: Defining dependency "vhost" 00:06:56.970 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:56.970 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:56.970 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:56.970 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:56.970 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:56.970 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:56.970 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:56.970 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:56.970 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:56.970 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:56.970 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:56.970 Configuring doxy-api-html.conf using configuration 00:06:56.970 Configuring doxy-api-man.conf using configuration 00:06:56.970 Program mandb found: YES (/usr/bin/mandb) 00:06:56.970 Program sphinx-build found: NO 00:06:56.970 Configuring rte_build_config.h using configuration 00:06:56.970 Message: 00:06:56.970 ================= 00:06:56.970 Applications Enabled 00:06:56.970 ================= 00:06:56.970 00:06:56.970 apps: 00:06:56.970 00:06:56.970 00:06:56.970 Message: 00:06:56.970 ================= 00:06:56.970 Libraries Enabled 00:06:56.970 ================= 00:06:56.970 00:06:56.970 libs: 00:06:56.970 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:56.970 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:56.970 cryptodev, dmadev, power, reorder, security, vhost, 00:06:56.970 00:06:56.970 Message: 00:06:56.970 =============== 00:06:56.970 Drivers Enabled 00:06:56.970 =============== 00:06:56.970 00:06:56.970 common: 00:06:56.970 00:06:56.970 bus: 00:06:56.970 pci, vdev, 00:06:56.970 mempool: 00:06:56.970 ring, 00:06:56.970 dma: 00:06:56.970 00:06:56.970 net: 00:06:56.970 00:06:56.970 crypto: 00:06:56.970 00:06:56.970 compress: 00:06:56.970 00:06:56.970 vdpa: 00:06:56.970 00:06:56.970 00:06:56.970 Message: 00:06:56.970 ================= 00:06:56.970 Content Skipped 00:06:56.970 ================= 00:06:56.970 00:06:56.970 apps: 00:06:56.970 dumpcap: explicitly disabled via build config 00:06:56.970 graph: explicitly disabled via build config 00:06:56.970 pdump: explicitly disabled via build config 00:06:56.970 proc-info: explicitly disabled via build config 00:06:56.970 test-acl: explicitly disabled via build config 00:06:56.970 test-bbdev: explicitly disabled via build config 00:06:56.970 test-cmdline: explicitly disabled via build config 00:06:56.970 test-compress-perf: explicitly disabled via build config 00:06:56.970 test-crypto-perf: explicitly disabled via build config 00:06:56.970 test-dma-perf: explicitly disabled via build config 00:06:56.970 test-eventdev: explicitly disabled via build config 00:06:56.970 test-fib: explicitly disabled via build config 00:06:56.970 test-flow-perf: explicitly disabled via build config 00:06:56.970 test-gpudev: explicitly disabled via build config 00:06:56.970 test-mldev: explicitly disabled via build config 00:06:56.970 test-pipeline: explicitly disabled via build config 00:06:56.970 test-pmd: explicitly disabled via build config 00:06:56.970 test-regex: explicitly disabled 
via build config 00:06:56.970 test-sad: explicitly disabled via build config 00:06:56.970 test-security-perf: explicitly disabled via build config 00:06:56.970 00:06:56.970 libs: 00:06:56.970 metrics: explicitly disabled via build config 00:06:56.970 acl: explicitly disabled via build config 00:06:56.970 bbdev: explicitly disabled via build config 00:06:56.970 bitratestats: explicitly disabled via build config 00:06:56.970 bpf: explicitly disabled via build config 00:06:56.970 cfgfile: explicitly disabled via build config 00:06:56.970 distributor: explicitly disabled via build config 00:06:56.970 efd: explicitly disabled via build config 00:06:56.970 eventdev: explicitly disabled via build config 00:06:56.970 dispatcher: explicitly disabled via build config 00:06:56.970 gpudev: explicitly disabled via build config 00:06:56.970 gro: explicitly disabled via build config 00:06:56.970 gso: explicitly disabled via build config 00:06:56.970 ip_frag: explicitly disabled via build config 00:06:56.970 jobstats: explicitly disabled via build config 00:06:56.970 latencystats: explicitly disabled via build config 00:06:56.970 lpm: explicitly disabled via build config 00:06:56.970 member: explicitly disabled via build config 00:06:56.970 pcapng: explicitly disabled via build config 00:06:56.970 rawdev: explicitly disabled via build config 00:06:56.970 regexdev: explicitly disabled via build config 00:06:56.970 mldev: explicitly disabled via build config 00:06:56.970 rib: explicitly disabled via build config 00:06:56.970 sched: explicitly disabled via build config 00:06:56.970 stack: explicitly disabled via build config 00:06:56.970 ipsec: explicitly disabled via build config 00:06:56.970 pdcp: explicitly disabled via build config 00:06:56.970 fib: explicitly disabled via build config 00:06:56.970 port: explicitly disabled via build config 00:06:56.970 pdump: explicitly disabled via build config 00:06:56.970 table: explicitly disabled via build config 00:06:56.970 pipeline: explicitly disabled via build config 00:06:56.970 graph: explicitly disabled via build config 00:06:56.970 node: explicitly disabled via build config 00:06:56.970 00:06:56.970 drivers: 00:06:56.970 common/cpt: not in enabled drivers build config 00:06:56.970 common/dpaax: not in enabled drivers build config 00:06:56.970 common/iavf: not in enabled drivers build config 00:06:56.970 common/idpf: not in enabled drivers build config 00:06:56.970 common/mvep: not in enabled drivers build config 00:06:56.970 common/octeontx: not in enabled drivers build config 00:06:56.970 bus/auxiliary: not in enabled drivers build config 00:06:56.970 bus/cdx: not in enabled drivers build config 00:06:56.970 bus/dpaa: not in enabled drivers build config 00:06:56.970 bus/fslmc: not in enabled drivers build config 00:06:56.970 bus/ifpga: not in enabled drivers build config 00:06:56.970 bus/platform: not in enabled drivers build config 00:06:56.970 bus/vmbus: not in enabled drivers build config 00:06:56.970 common/cnxk: not in enabled drivers build config 00:06:56.970 common/mlx5: not in enabled drivers build config 00:06:56.970 common/nfp: not in enabled drivers build config 00:06:56.970 common/qat: not in enabled drivers build config 00:06:56.970 common/sfc_efx: not in enabled drivers build config 00:06:56.970 mempool/bucket: not in enabled drivers build config 00:06:56.970 mempool/cnxk: not in enabled drivers build config 00:06:56.970 mempool/dpaa: not in enabled drivers build config 00:06:56.970 mempool/dpaa2: not in enabled drivers build config 
00:06:56.970 mempool/octeontx: not in enabled drivers build config 00:06:56.970 mempool/stack: not in enabled drivers build config 00:06:56.970 dma/cnxk: not in enabled drivers build config 00:06:56.970 dma/dpaa: not in enabled drivers build config 00:06:56.970 dma/dpaa2: not in enabled drivers build config 00:06:56.970 dma/hisilicon: not in enabled drivers build config 00:06:56.970 dma/idxd: not in enabled drivers build config 00:06:56.970 dma/ioat: not in enabled drivers build config 00:06:56.970 dma/skeleton: not in enabled drivers build config 00:06:56.970 net/af_packet: not in enabled drivers build config 00:06:56.970 net/af_xdp: not in enabled drivers build config 00:06:56.970 net/ark: not in enabled drivers build config 00:06:56.970 net/atlantic: not in enabled drivers build config 00:06:56.970 net/avp: not in enabled drivers build config 00:06:56.971 net/axgbe: not in enabled drivers build config 00:06:56.971 net/bnx2x: not in enabled drivers build config 00:06:56.971 net/bnxt: not in enabled drivers build config 00:06:56.971 net/bonding: not in enabled drivers build config 00:06:56.971 net/cnxk: not in enabled drivers build config 00:06:56.971 net/cpfl: not in enabled drivers build config 00:06:56.971 net/cxgbe: not in enabled drivers build config 00:06:56.971 net/dpaa: not in enabled drivers build config 00:06:56.971 net/dpaa2: not in enabled drivers build config 00:06:56.971 net/e1000: not in enabled drivers build config 00:06:56.971 net/ena: not in enabled drivers build config 00:06:56.971 net/enetc: not in enabled drivers build config 00:06:56.971 net/enetfec: not in enabled drivers build config 00:06:56.971 net/enic: not in enabled drivers build config 00:06:56.971 net/failsafe: not in enabled drivers build config 00:06:56.971 net/fm10k: not in enabled drivers build config 00:06:56.971 net/gve: not in enabled drivers build config 00:06:56.971 net/hinic: not in enabled drivers build config 00:06:56.971 net/hns3: not in enabled drivers build config 00:06:56.971 net/i40e: not in enabled drivers build config 00:06:56.971 net/iavf: not in enabled drivers build config 00:06:56.971 net/ice: not in enabled drivers build config 00:06:56.971 net/idpf: not in enabled drivers build config 00:06:56.971 net/igc: not in enabled drivers build config 00:06:56.971 net/ionic: not in enabled drivers build config 00:06:56.971 net/ipn3ke: not in enabled drivers build config 00:06:56.971 net/ixgbe: not in enabled drivers build config 00:06:56.971 net/mana: not in enabled drivers build config 00:06:56.971 net/memif: not in enabled drivers build config 00:06:56.971 net/mlx4: not in enabled drivers build config 00:06:56.971 net/mlx5: not in enabled drivers build config 00:06:56.971 net/mvneta: not in enabled drivers build config 00:06:56.971 net/mvpp2: not in enabled drivers build config 00:06:56.971 net/netvsc: not in enabled drivers build config 00:06:56.971 net/nfb: not in enabled drivers build config 00:06:56.971 net/nfp: not in enabled drivers build config 00:06:56.971 net/ngbe: not in enabled drivers build config 00:06:56.971 net/null: not in enabled drivers build config 00:06:56.971 net/octeontx: not in enabled drivers build config 00:06:56.971 net/octeon_ep: not in enabled drivers build config 00:06:56.971 net/pcap: not in enabled drivers build config 00:06:56.971 net/pfe: not in enabled drivers build config 00:06:56.971 net/qede: not in enabled drivers build config 00:06:56.971 net/ring: not in enabled drivers build config 00:06:56.971 net/sfc: not in enabled drivers build config 00:06:56.971 
net/softnic: not in enabled drivers build config 00:06:56.971 net/tap: not in enabled drivers build config 00:06:56.971 net/thunderx: not in enabled drivers build config 00:06:56.971 net/txgbe: not in enabled drivers build config 00:06:56.971 net/vdev_netvsc: not in enabled drivers build config 00:06:56.971 net/vhost: not in enabled drivers build config 00:06:56.971 net/virtio: not in enabled drivers build config 00:06:56.971 net/vmxnet3: not in enabled drivers build config 00:06:56.971 raw/*: missing internal dependency, "rawdev" 00:06:56.971 crypto/armv8: not in enabled drivers build config 00:06:56.971 crypto/bcmfs: not in enabled drivers build config 00:06:56.971 crypto/caam_jr: not in enabled drivers build config 00:06:56.971 crypto/ccp: not in enabled drivers build config 00:06:56.971 crypto/cnxk: not in enabled drivers build config 00:06:56.971 crypto/dpaa_sec: not in enabled drivers build config 00:06:56.971 crypto/dpaa2_sec: not in enabled drivers build config 00:06:56.971 crypto/ipsec_mb: not in enabled drivers build config 00:06:56.971 crypto/mlx5: not in enabled drivers build config 00:06:56.971 crypto/mvsam: not in enabled drivers build config 00:06:56.971 crypto/nitrox: not in enabled drivers build config 00:06:56.971 crypto/null: not in enabled drivers build config 00:06:56.971 crypto/octeontx: not in enabled drivers build config 00:06:56.971 crypto/openssl: not in enabled drivers build config 00:06:56.971 crypto/scheduler: not in enabled drivers build config 00:06:56.971 crypto/uadk: not in enabled drivers build config 00:06:56.971 crypto/virtio: not in enabled drivers build config 00:06:56.971 compress/isal: not in enabled drivers build config 00:06:56.971 compress/mlx5: not in enabled drivers build config 00:06:56.971 compress/octeontx: not in enabled drivers build config 00:06:56.971 compress/zlib: not in enabled drivers build config 00:06:56.971 regex/*: missing internal dependency, "regexdev" 00:06:56.971 ml/*: missing internal dependency, "mldev" 00:06:56.971 vdpa/ifc: not in enabled drivers build config 00:06:56.971 vdpa/mlx5: not in enabled drivers build config 00:06:56.971 vdpa/nfp: not in enabled drivers build config 00:06:56.971 vdpa/sfc: not in enabled drivers build config 00:06:56.971 event/*: missing internal dependency, "eventdev" 00:06:56.971 baseband/*: missing internal dependency, "bbdev" 00:06:56.971 gpu/*: missing internal dependency, "gpudev" 00:06:56.971 00:06:56.971 00:06:56.971 Build targets in project: 85 00:06:56.971 00:06:56.971 DPDK 23.11.0 00:06:56.971 00:06:56.971 User defined options 00:06:56.971 buildtype : debug 00:06:56.971 default_library : shared 00:06:56.971 libdir : lib 00:06:56.971 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:56.971 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:06:56.971 c_link_args : 00:06:56.971 cpu_instruction_set: native 00:06:56.971 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:56.971 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:56.971 enable_docs : false 00:06:56.971 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:56.971 
enable_kmods : false 00:06:56.971 tests : false 00:06:56.971 00:06:56.971 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:56.971 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:56.971 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:56.971 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:56.971 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:56.971 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:56.971 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:56.971 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:56.971 [7/265] Linking static target lib/librte_kvargs.a 00:06:56.971 [8/265] Linking static target lib/librte_log.a 00:06:56.971 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:56.971 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:57.246 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:57.246 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:57.246 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:57.246 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:57.246 [15/265] Linking static target lib/librte_telemetry.a 00:06:57.246 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:57.246 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:57.505 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:57.505 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:57.505 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:57.505 [21/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:57.505 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:57.763 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:57.763 [24/265] Linking target lib/librte_log.so.24.0 00:06:57.763 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:57.763 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:58.021 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:58.021 [28/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:06:58.021 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:58.022 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:58.022 [31/265] Linking target lib/librte_kvargs.so.24.0 00:06:58.022 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.022 [33/265] Linking target lib/librte_telemetry.so.24.0 00:06:58.280 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:58.280 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:58.280 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:58.280 [37/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:06:58.280 [38/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:58.280 [39/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:06:58.280 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:58.280 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:58.539 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:58.539 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:58.539 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:58.539 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:58.540 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:58.798 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:58.798 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:59.056 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:59.056 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:59.056 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:59.056 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:59.056 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:59.056 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:59.056 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:59.056 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:59.056 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:59.314 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:59.314 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:59.314 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:59.314 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:59.314 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:59.572 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:59.572 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:59.572 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:59.572 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:59.832 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:59.832 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:59.832 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:59.832 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:59.832 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:00.090 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:00.090 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:00.090 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:00.090 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:00.090 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:00.090 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:00.090 [78/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:00.349 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:00.349 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:00.349 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:00.349 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:00.609 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:00.609 [84/265] Linking static target lib/librte_ring.a 00:07:00.609 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:00.609 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:00.609 [87/265] Linking static target lib/librte_eal.a 00:07:00.609 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:00.868 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:00.868 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:00.868 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:00.868 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:00.868 [93/265] Linking static target lib/librte_rcu.a 00:07:00.868 [94/265] Linking static target lib/librte_mempool.a 00:07:01.127 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:01.127 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:01.385 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:01.385 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:01.385 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:01.385 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:01.385 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:01.385 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:01.644 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:01.644 [104/265] Linking static target lib/librte_mbuf.a 00:07:01.644 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:01.644 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:01.903 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:01.903 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:01.903 [109/265] Linking static target lib/librte_net.a 00:07:01.903 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:01.903 [111/265] Linking static target lib/librte_meter.a 00:07:02.162 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:02.162 [113/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.162 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:02.420 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:02.420 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.420 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.679 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:02.679 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.937 
[120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:02.937 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:02.937 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:03.195 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:03.195 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:03.195 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:03.195 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:03.195 [127/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:03.195 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:03.195 [129/265] Linking static target lib/librte_pci.a 00:07:03.195 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:03.195 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:03.453 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:03.453 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:03.453 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:03.453 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.453 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:03.712 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:03.712 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:03.712 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:03.712 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:03.712 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:03.712 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:03.712 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:03.712 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:03.712 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:03.712 [146/265] Linking static target lib/librte_cmdline.a 00:07:03.969 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:03.969 [148/265] Linking static target lib/librte_ethdev.a 00:07:03.969 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:03.969 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:04.228 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:04.228 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:04.228 [153/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:04.228 [154/265] Linking static target lib/librte_timer.a 00:07:04.228 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:04.228 [156/265] Linking static target lib/librte_compressdev.a 00:07:04.487 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:04.488 [158/265] Linking static target lib/librte_hash.a 00:07:04.488 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:04.488 [160/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:04.746 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:04.746 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:04.746 [163/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.746 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:04.746 [165/265] Linking static target lib/librte_dmadev.a 00:07:04.746 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:05.004 [167/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.004 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:05.004 [169/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:05.004 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:05.004 [171/265] Linking static target lib/librte_cryptodev.a 00:07:05.263 [172/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:05.263 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:05.263 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.263 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.523 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.523 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:05.523 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:05.523 [179/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:05.523 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:05.523 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:05.782 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:05.782 [183/265] Linking static target lib/librte_power.a 00:07:05.782 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:05.782 [185/265] Linking static target lib/librte_reorder.a 00:07:06.042 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:06.042 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:06.042 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:06.301 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:06.301 [190/265] Linking static target lib/librte_security.a 00:07:06.301 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:06.560 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.820 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:06.820 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:06.820 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:06.820 [196/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.081 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:07.081 [198/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.081 [199/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:07.340 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:07.340 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:07.340 [202/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.614 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:07.614 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:07.614 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:07.614 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:07.614 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:07.614 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:07.614 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:07.891 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:07.891 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:07.891 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:07.891 [213/265] Linking static target drivers/librte_bus_pci.a 00:07:07.891 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:07.891 [215/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:07.891 [216/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:07.891 [217/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:07.891 [218/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:07.891 [219/265] Linking static target drivers/librte_bus_vdev.a 00:07:08.150 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:08.150 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:08.150 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:08.150 [223/265] Linking static target drivers/librte_mempool_ring.a 00:07:08.150 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.409 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.976 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:08.976 [227/265] Linking static target lib/librte_vhost.a 00:07:11.516 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:11.516 [229/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:11.773 [230/265] Linking target lib/librte_eal.so.24.0 00:07:11.774 [231/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:07:11.774 [232/265] Linking target lib/librte_timer.so.24.0 00:07:11.774 [233/265] Linking target lib/librte_ring.so.24.0 00:07:11.774 [234/265] Linking target lib/librte_dmadev.so.24.0 00:07:11.774 [235/265] Linking target lib/librte_pci.so.24.0 00:07:11.774 [236/265] Linking target lib/librte_meter.so.24.0 00:07:11.774 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:07:12.032 [238/265] Generating symbol file 
lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:07:12.032 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:07:12.032 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:07:12.032 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:07:12.032 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:07:12.032 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:07:12.032 [244/265] Linking target lib/librte_rcu.so.24.0 00:07:12.032 [245/265] Linking target lib/librte_mempool.so.24.0 00:07:12.290 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:07:12.290 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:07:12.290 [248/265] Linking target lib/librte_mbuf.so.24.0 00:07:12.290 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:07:12.290 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:07:12.550 [251/265] Linking target lib/librte_compressdev.so.24.0 00:07:12.550 [252/265] Linking target lib/librte_net.so.24.0 00:07:12.550 [253/265] Linking target lib/librte_reorder.so.24.0 00:07:12.550 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:07:12.550 [255/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:07:12.550 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:07:12.550 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.550 [258/265] Linking target lib/librte_hash.so.24.0 00:07:12.550 [259/265] Linking target lib/librte_cmdline.so.24.0 00:07:12.550 [260/265] Linking target lib/librte_security.so.24.0 00:07:12.808 [261/265] Linking target lib/librte_ethdev.so.24.0 00:07:12.808 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:07:12.808 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:07:12.808 [264/265] Linking target lib/librte_power.so.24.0 00:07:13.069 [265/265] Linking target lib/librte_vhost.so.24.0 00:07:13.069 INFO: autodetecting backend as ninja 00:07:13.069 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:14.007 CC lib/ut_mock/mock.o 00:07:14.007 CC lib/log/log.o 00:07:14.007 CC lib/log/log_flags.o 00:07:14.007 CC lib/log/log_deprecated.o 00:07:14.007 CC lib/ut/ut.o 00:07:14.265 LIB libspdk_ut_mock.a 00:07:14.265 LIB libspdk_ut.a 00:07:14.265 SO libspdk_ut_mock.so.5.0 00:07:14.265 SO libspdk_ut.so.1.0 00:07:14.265 LIB libspdk_log.a 00:07:14.265 SYMLINK libspdk_ut_mock.so 00:07:14.265 SYMLINK libspdk_ut.so 00:07:14.265 SO libspdk_log.so.6.1 00:07:14.524 SYMLINK libspdk_log.so 00:07:14.524 CXX lib/trace_parser/trace.o 00:07:14.524 CC lib/ioat/ioat.o 00:07:14.524 CC lib/util/base64.o 00:07:14.524 CC lib/util/cpuset.o 00:07:14.524 CC lib/dma/dma.o 00:07:14.524 CC lib/util/bit_array.o 00:07:14.524 CC lib/util/crc16.o 00:07:14.524 CC lib/util/crc32.o 00:07:14.524 CC lib/util/crc32c.o 00:07:14.784 CC lib/vfio_user/host/vfio_user_pci.o 00:07:14.784 CC lib/vfio_user/host/vfio_user.o 00:07:14.784 CC lib/util/crc32_ieee.o 00:07:14.784 CC lib/util/crc64.o 00:07:14.784 CC lib/util/dif.o 00:07:14.784 CC lib/util/fd.o 00:07:14.784 CC lib/util/file.o 00:07:14.784 LIB libspdk_dma.a 00:07:14.784 LIB 
libspdk_ioat.a 00:07:14.784 SO libspdk_dma.so.3.0 00:07:14.784 SO libspdk_ioat.so.6.0 00:07:14.784 CC lib/util/hexlify.o 00:07:14.784 CC lib/util/iov.o 00:07:15.044 SYMLINK libspdk_ioat.so 00:07:15.044 SYMLINK libspdk_dma.so 00:07:15.044 CC lib/util/math.o 00:07:15.044 CC lib/util/pipe.o 00:07:15.044 CC lib/util/strerror_tls.o 00:07:15.044 LIB libspdk_vfio_user.a 00:07:15.044 CC lib/util/string.o 00:07:15.044 CC lib/util/uuid.o 00:07:15.044 SO libspdk_vfio_user.so.4.0 00:07:15.044 SYMLINK libspdk_vfio_user.so 00:07:15.044 CC lib/util/fd_group.o 00:07:15.044 CC lib/util/xor.o 00:07:15.044 CC lib/util/zipf.o 00:07:15.304 LIB libspdk_util.a 00:07:15.304 SO libspdk_util.so.8.0 00:07:15.562 SYMLINK libspdk_util.so 00:07:15.562 LIB libspdk_trace_parser.a 00:07:15.562 SO libspdk_trace_parser.so.4.0 00:07:15.562 CC lib/vmd/led.o 00:07:15.562 CC lib/vmd/vmd.o 00:07:15.562 CC lib/conf/conf.o 00:07:15.562 SYMLINK libspdk_trace_parser.so 00:07:15.562 CC lib/idxd/idxd.o 00:07:15.562 CC lib/rdma/rdma_verbs.o 00:07:15.562 CC lib/rdma/common.o 00:07:15.562 CC lib/idxd/idxd_user.o 00:07:15.562 CC lib/env_dpdk/env.o 00:07:15.562 CC lib/idxd/idxd_kernel.o 00:07:15.562 CC lib/json/json_parse.o 00:07:15.821 CC lib/json/json_util.o 00:07:15.821 CC lib/env_dpdk/memory.o 00:07:15.821 CC lib/env_dpdk/pci.o 00:07:15.821 LIB libspdk_conf.a 00:07:15.821 CC lib/env_dpdk/init.o 00:07:15.821 SO libspdk_conf.so.5.0 00:07:15.821 LIB libspdk_rdma.a 00:07:15.821 SYMLINK libspdk_conf.so 00:07:15.821 CC lib/json/json_write.o 00:07:15.821 SO libspdk_rdma.so.5.0 00:07:15.821 CC lib/env_dpdk/threads.o 00:07:15.821 SYMLINK libspdk_rdma.so 00:07:15.821 CC lib/env_dpdk/pci_ioat.o 00:07:15.821 CC lib/env_dpdk/pci_virtio.o 00:07:16.081 CC lib/env_dpdk/pci_vmd.o 00:07:16.081 CC lib/env_dpdk/pci_idxd.o 00:07:16.081 CC lib/env_dpdk/pci_event.o 00:07:16.081 LIB libspdk_idxd.a 00:07:16.081 SO libspdk_idxd.so.11.0 00:07:16.081 CC lib/env_dpdk/sigbus_handler.o 00:07:16.081 LIB libspdk_json.a 00:07:16.081 CC lib/env_dpdk/pci_dpdk.o 00:07:16.081 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:16.081 SO libspdk_json.so.5.1 00:07:16.081 SYMLINK libspdk_idxd.so 00:07:16.081 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:16.081 LIB libspdk_vmd.a 00:07:16.353 SO libspdk_vmd.so.5.0 00:07:16.353 SYMLINK libspdk_json.so 00:07:16.353 SYMLINK libspdk_vmd.so 00:07:16.353 CC lib/jsonrpc/jsonrpc_server.o 00:07:16.353 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:16.353 CC lib/jsonrpc/jsonrpc_client.o 00:07:16.353 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:16.615 LIB libspdk_jsonrpc.a 00:07:16.615 SO libspdk_jsonrpc.so.5.1 00:07:16.875 SYMLINK libspdk_jsonrpc.so 00:07:17.133 LIB libspdk_env_dpdk.a 00:07:17.133 CC lib/rpc/rpc.o 00:07:17.133 SO libspdk_env_dpdk.so.13.0 00:07:17.133 LIB libspdk_rpc.a 00:07:17.133 SYMLINK libspdk_env_dpdk.so 00:07:17.393 SO libspdk_rpc.so.5.0 00:07:17.393 SYMLINK libspdk_rpc.so 00:07:17.653 CC lib/notify/notify.o 00:07:17.653 CC lib/trace/trace_flags.o 00:07:17.653 CC lib/trace/trace.o 00:07:17.653 CC lib/trace/trace_rpc.o 00:07:17.653 CC lib/notify/notify_rpc.o 00:07:17.653 CC lib/sock/sock_rpc.o 00:07:17.653 CC lib/sock/sock.o 00:07:17.653 LIB libspdk_notify.a 00:07:17.913 SO libspdk_notify.so.5.0 00:07:17.913 LIB libspdk_trace.a 00:07:17.913 SYMLINK libspdk_notify.so 00:07:17.913 SO libspdk_trace.so.9.0 00:07:17.913 SYMLINK libspdk_trace.so 00:07:17.913 LIB libspdk_sock.a 00:07:18.172 SO libspdk_sock.so.8.0 00:07:18.172 SYMLINK libspdk_sock.so 00:07:18.172 CC lib/thread/thread.o 00:07:18.172 CC lib/thread/iobuf.o 00:07:18.478 CC 
lib/nvme/nvme_ctrlr_cmd.o 00:07:18.478 CC lib/nvme/nvme_ctrlr.o 00:07:18.478 CC lib/nvme/nvme_fabric.o 00:07:18.478 CC lib/nvme/nvme_qpair.o 00:07:18.478 CC lib/nvme/nvme_ns_cmd.o 00:07:18.478 CC lib/nvme/nvme_pcie.o 00:07:18.478 CC lib/nvme/nvme_ns.o 00:07:18.478 CC lib/nvme/nvme_pcie_common.o 00:07:18.478 CC lib/nvme/nvme.o 00:07:19.044 CC lib/nvme/nvme_quirks.o 00:07:19.045 CC lib/nvme/nvme_transport.o 00:07:19.045 CC lib/nvme/nvme_discovery.o 00:07:19.045 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:19.303 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:19.303 CC lib/nvme/nvme_tcp.o 00:07:19.303 CC lib/nvme/nvme_opal.o 00:07:19.303 CC lib/nvme/nvme_io_msg.o 00:07:19.561 CC lib/nvme/nvme_poll_group.o 00:07:19.561 LIB libspdk_thread.a 00:07:19.820 CC lib/nvme/nvme_zns.o 00:07:19.820 SO libspdk_thread.so.9.0 00:07:19.820 CC lib/nvme/nvme_cuse.o 00:07:19.820 CC lib/nvme/nvme_vfio_user.o 00:07:19.820 SYMLINK libspdk_thread.so 00:07:19.820 CC lib/nvme/nvme_rdma.o 00:07:19.820 CC lib/accel/accel.o 00:07:19.820 CC lib/blob/blobstore.o 00:07:20.078 CC lib/blob/request.o 00:07:20.078 CC lib/blob/zeroes.o 00:07:20.337 CC lib/blob/blob_bs_dev.o 00:07:20.337 CC lib/accel/accel_rpc.o 00:07:20.337 CC lib/init/json_config.o 00:07:20.337 CC lib/virtio/virtio.o 00:07:20.337 CC lib/virtio/virtio_vhost_user.o 00:07:20.594 CC lib/virtio/virtio_vfio_user.o 00:07:20.594 CC lib/init/subsystem.o 00:07:20.594 CC lib/accel/accel_sw.o 00:07:20.594 CC lib/init/subsystem_rpc.o 00:07:20.594 CC lib/init/rpc.o 00:07:20.853 CC lib/virtio/virtio_pci.o 00:07:20.853 LIB libspdk_init.a 00:07:20.853 SO libspdk_init.so.4.0 00:07:20.853 LIB libspdk_accel.a 00:07:20.853 CC lib/vfu_tgt/tgt_rpc.o 00:07:20.853 CC lib/vfu_tgt/tgt_endpoint.o 00:07:20.853 SYMLINK libspdk_init.so 00:07:20.853 SO libspdk_accel.so.14.0 00:07:21.111 SYMLINK libspdk_accel.so 00:07:21.111 LIB libspdk_virtio.a 00:07:21.111 LIB libspdk_nvme.a 00:07:21.111 SO libspdk_virtio.so.6.0 00:07:21.111 CC lib/event/app.o 00:07:21.111 CC lib/event/reactor.o 00:07:21.111 CC lib/event/app_rpc.o 00:07:21.111 CC lib/event/log_rpc.o 00:07:21.111 CC lib/event/scheduler_static.o 00:07:21.111 SYMLINK libspdk_virtio.so 00:07:21.111 LIB libspdk_vfu_tgt.a 00:07:21.111 CC lib/bdev/bdev.o 00:07:21.111 CC lib/bdev/bdev_rpc.o 00:07:21.111 SO libspdk_nvme.so.12.0 00:07:21.373 SO libspdk_vfu_tgt.so.2.0 00:07:21.373 CC lib/bdev/bdev_zone.o 00:07:21.373 CC lib/bdev/part.o 00:07:21.373 SYMLINK libspdk_vfu_tgt.so 00:07:21.373 CC lib/bdev/scsi_nvme.o 00:07:21.373 SYMLINK libspdk_nvme.so 00:07:21.631 LIB libspdk_event.a 00:07:21.631 SO libspdk_event.so.12.0 00:07:21.631 SYMLINK libspdk_event.so 00:07:22.566 LIB libspdk_blob.a 00:07:22.566 SO libspdk_blob.so.10.1 00:07:22.566 SYMLINK libspdk_blob.so 00:07:22.825 CC lib/blobfs/tree.o 00:07:22.825 CC lib/blobfs/blobfs.o 00:07:22.825 CC lib/lvol/lvol.o 00:07:23.765 LIB libspdk_bdev.a 00:07:23.765 LIB libspdk_blobfs.a 00:07:23.765 SO libspdk_bdev.so.14.0 00:07:23.765 SO libspdk_blobfs.so.9.0 00:07:23.765 LIB libspdk_lvol.a 00:07:23.765 SYMLINK libspdk_bdev.so 00:07:23.765 SYMLINK libspdk_blobfs.so 00:07:23.765 SO libspdk_lvol.so.9.1 00:07:23.765 SYMLINK libspdk_lvol.so 00:07:23.765 CC lib/nvmf/ctrlr.o 00:07:23.765 CC lib/nvmf/ctrlr_bdev.o 00:07:23.765 CC lib/nvmf/ctrlr_discovery.o 00:07:23.765 CC lib/nvmf/subsystem.o 00:07:23.765 CC lib/nvmf/nvmf.o 00:07:23.765 CC lib/nvmf/nvmf_rpc.o 00:07:23.765 CC lib/nbd/nbd.o 00:07:23.765 CC lib/ublk/ublk.o 00:07:24.025 CC lib/scsi/dev.o 00:07:24.025 CC lib/ftl/ftl_core.o 00:07:24.025 CC lib/scsi/lun.o 00:07:24.284 CC 
lib/nbd/nbd_rpc.o 00:07:24.284 CC lib/ftl/ftl_init.o 00:07:24.284 CC lib/nvmf/transport.o 00:07:24.284 CC lib/scsi/port.o 00:07:24.543 LIB libspdk_nbd.a 00:07:24.543 SO libspdk_nbd.so.6.0 00:07:24.543 CC lib/ftl/ftl_layout.o 00:07:24.543 SYMLINK libspdk_nbd.so 00:07:24.543 CC lib/ftl/ftl_debug.o 00:07:24.543 CC lib/ublk/ublk_rpc.o 00:07:24.543 CC lib/nvmf/tcp.o 00:07:24.543 CC lib/scsi/scsi.o 00:07:24.543 CC lib/nvmf/vfio_user.o 00:07:24.802 LIB libspdk_ublk.a 00:07:24.802 SO libspdk_ublk.so.2.0 00:07:24.802 CC lib/scsi/scsi_bdev.o 00:07:24.802 CC lib/scsi/scsi_pr.o 00:07:24.802 CC lib/scsi/scsi_rpc.o 00:07:24.802 CC lib/ftl/ftl_io.o 00:07:24.802 SYMLINK libspdk_ublk.so 00:07:24.802 CC lib/scsi/task.o 00:07:24.802 CC lib/nvmf/rdma.o 00:07:25.062 CC lib/ftl/ftl_sb.o 00:07:25.062 CC lib/ftl/ftl_l2p.o 00:07:25.062 CC lib/ftl/ftl_l2p_flat.o 00:07:25.062 CC lib/ftl/ftl_nv_cache.o 00:07:25.062 CC lib/ftl/ftl_band.o 00:07:25.062 CC lib/ftl/ftl_band_ops.o 00:07:25.062 LIB libspdk_scsi.a 00:07:25.062 CC lib/ftl/ftl_writer.o 00:07:25.321 CC lib/ftl/ftl_rq.o 00:07:25.321 SO libspdk_scsi.so.8.0 00:07:25.321 SYMLINK libspdk_scsi.so 00:07:25.321 CC lib/ftl/ftl_reloc.o 00:07:25.321 CC lib/iscsi/conn.o 00:07:25.321 CC lib/iscsi/init_grp.o 00:07:25.580 CC lib/iscsi/iscsi.o 00:07:25.580 CC lib/ftl/ftl_l2p_cache.o 00:07:25.580 CC lib/vhost/vhost.o 00:07:25.580 CC lib/iscsi/md5.o 00:07:25.839 CC lib/iscsi/param.o 00:07:25.839 CC lib/iscsi/portal_grp.o 00:07:25.839 CC lib/ftl/ftl_p2l.o 00:07:25.839 CC lib/ftl/mngt/ftl_mngt.o 00:07:26.098 CC lib/iscsi/tgt_node.o 00:07:26.098 CC lib/vhost/vhost_rpc.o 00:07:26.098 CC lib/iscsi/iscsi_subsystem.o 00:07:26.098 CC lib/iscsi/iscsi_rpc.o 00:07:26.098 CC lib/vhost/vhost_scsi.o 00:07:26.098 CC lib/iscsi/task.o 00:07:26.098 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:26.357 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:26.357 CC lib/vhost/vhost_blk.o 00:07:26.357 CC lib/vhost/rte_vhost_user.o 00:07:26.357 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:26.357 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:26.357 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:26.357 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:26.636 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:26.636 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:26.636 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:26.636 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:26.636 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:26.636 LIB libspdk_iscsi.a 00:07:26.636 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:26.636 LIB libspdk_nvmf.a 00:07:26.897 SO libspdk_iscsi.so.7.0 00:07:26.897 CC lib/ftl/utils/ftl_conf.o 00:07:26.897 SO libspdk_nvmf.so.17.0 00:07:26.897 CC lib/ftl/utils/ftl_md.o 00:07:26.897 CC lib/ftl/utils/ftl_mempool.o 00:07:26.897 CC lib/ftl/utils/ftl_bitmap.o 00:07:26.897 CC lib/ftl/utils/ftl_property.o 00:07:26.897 SYMLINK libspdk_iscsi.so 00:07:26.897 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:27.156 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:27.156 SYMLINK libspdk_nvmf.so 00:07:27.156 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:27.156 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:27.156 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:27.156 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:27.156 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:27.156 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:27.156 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:27.156 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:27.156 CC lib/ftl/base/ftl_base_dev.o 00:07:27.415 CC lib/ftl/base/ftl_base_bdev.o 00:07:27.415 CC lib/ftl/ftl_trace.o 00:07:27.415 LIB libspdk_vhost.a 00:07:27.415 SO libspdk_vhost.so.7.1 00:07:27.415 SYMLINK libspdk_vhost.so 00:07:27.674 
LIB libspdk_ftl.a 00:07:27.674 SO libspdk_ftl.so.8.0 00:07:27.934 SYMLINK libspdk_ftl.so 00:07:28.193 CC module/env_dpdk/env_dpdk_rpc.o 00:07:28.193 CC module/vfu_device/vfu_virtio.o 00:07:28.193 CC module/blob/bdev/blob_bdev.o 00:07:28.193 CC module/accel/dsa/accel_dsa.o 00:07:28.193 CC module/scheduler/gscheduler/gscheduler.o 00:07:28.193 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:28.193 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:28.193 CC module/accel/ioat/accel_ioat.o 00:07:28.193 CC module/sock/posix/posix.o 00:07:28.193 CC module/accel/error/accel_error.o 00:07:28.452 LIB libspdk_env_dpdk_rpc.a 00:07:28.452 SO libspdk_env_dpdk_rpc.so.5.0 00:07:28.452 LIB libspdk_scheduler_gscheduler.a 00:07:28.452 LIB libspdk_scheduler_dpdk_governor.a 00:07:28.452 SO libspdk_scheduler_gscheduler.so.3.0 00:07:28.452 SYMLINK libspdk_env_dpdk_rpc.so 00:07:28.452 SO libspdk_scheduler_dpdk_governor.so.3.0 00:07:28.452 CC module/vfu_device/vfu_virtio_blk.o 00:07:28.452 LIB libspdk_scheduler_dynamic.a 00:07:28.452 CC module/accel/error/accel_error_rpc.o 00:07:28.452 SYMLINK libspdk_scheduler_gscheduler.so 00:07:28.452 CC module/accel/ioat/accel_ioat_rpc.o 00:07:28.452 CC module/vfu_device/vfu_virtio_scsi.o 00:07:28.452 SO libspdk_scheduler_dynamic.so.3.0 00:07:28.452 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:28.452 CC module/accel/dsa/accel_dsa_rpc.o 00:07:28.452 CC module/vfu_device/vfu_virtio_rpc.o 00:07:28.452 LIB libspdk_blob_bdev.a 00:07:28.452 SO libspdk_blob_bdev.so.10.1 00:07:28.452 SYMLINK libspdk_scheduler_dynamic.so 00:07:28.711 SYMLINK libspdk_blob_bdev.so 00:07:28.711 LIB libspdk_accel_error.a 00:07:28.711 LIB libspdk_accel_ioat.a 00:07:28.711 SO libspdk_accel_error.so.1.0 00:07:28.711 LIB libspdk_accel_dsa.a 00:07:28.711 SO libspdk_accel_ioat.so.5.0 00:07:28.711 CC module/accel/iaa/accel_iaa.o 00:07:28.711 SO libspdk_accel_dsa.so.4.0 00:07:28.711 SYMLINK libspdk_accel_error.so 00:07:28.711 CC module/accel/iaa/accel_iaa_rpc.o 00:07:28.711 SYMLINK libspdk_accel_ioat.so 00:07:28.711 SYMLINK libspdk_accel_dsa.so 00:07:28.711 CC module/bdev/delay/vbdev_delay.o 00:07:28.711 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:28.711 LIB libspdk_vfu_device.a 00:07:28.971 CC module/bdev/error/vbdev_error.o 00:07:28.971 CC module/bdev/gpt/gpt.o 00:07:28.971 CC module/bdev/lvol/vbdev_lvol.o 00:07:28.971 LIB libspdk_accel_iaa.a 00:07:28.971 SO libspdk_vfu_device.so.2.0 00:07:28.971 CC module/blobfs/bdev/blobfs_bdev.o 00:07:28.971 SO libspdk_accel_iaa.so.2.0 00:07:28.971 CC module/bdev/malloc/bdev_malloc.o 00:07:28.971 SYMLINK libspdk_vfu_device.so 00:07:28.971 LIB libspdk_sock_posix.a 00:07:28.971 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:28.971 SYMLINK libspdk_accel_iaa.so 00:07:28.971 CC module/bdev/gpt/vbdev_gpt.o 00:07:28.971 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:28.971 SO libspdk_sock_posix.so.5.0 00:07:28.971 CC module/bdev/error/vbdev_error_rpc.o 00:07:28.971 SYMLINK libspdk_sock_posix.so 00:07:28.971 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:29.239 LIB libspdk_bdev_delay.a 00:07:29.239 LIB libspdk_blobfs_bdev.a 00:07:29.239 SO libspdk_bdev_delay.so.5.0 00:07:29.239 CC module/bdev/null/bdev_null.o 00:07:29.239 SO libspdk_blobfs_bdev.so.5.0 00:07:29.239 LIB libspdk_bdev_error.a 00:07:29.239 CC module/bdev/nvme/bdev_nvme.o 00:07:29.239 SO libspdk_bdev_error.so.5.0 00:07:29.239 LIB libspdk_bdev_gpt.a 00:07:29.239 SYMLINK libspdk_bdev_delay.so 00:07:29.239 SYMLINK libspdk_blobfs_bdev.so 00:07:29.239 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:29.239 CC 
module/bdev/null/bdev_null_rpc.o 00:07:29.239 SO libspdk_bdev_gpt.so.5.0 00:07:29.239 CC module/bdev/nvme/nvme_rpc.o 00:07:29.239 SYMLINK libspdk_bdev_error.so 00:07:29.239 LIB libspdk_bdev_malloc.a 00:07:29.239 SO libspdk_bdev_malloc.so.5.0 00:07:29.239 SYMLINK libspdk_bdev_gpt.so 00:07:29.239 LIB libspdk_bdev_lvol.a 00:07:29.239 SO libspdk_bdev_lvol.so.5.0 00:07:29.239 CC module/bdev/passthru/vbdev_passthru.o 00:07:29.521 SYMLINK libspdk_bdev_malloc.so 00:07:29.521 CC module/bdev/raid/bdev_raid.o 00:07:29.521 CC module/bdev/split/vbdev_split.o 00:07:29.521 SYMLINK libspdk_bdev_lvol.so 00:07:29.522 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:29.522 LIB libspdk_bdev_null.a 00:07:29.522 SO libspdk_bdev_null.so.5.0 00:07:29.522 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:29.522 CC module/bdev/nvme/bdev_mdns_client.o 00:07:29.522 CC module/bdev/aio/bdev_aio.o 00:07:29.522 SYMLINK libspdk_bdev_null.so 00:07:29.522 CC module/bdev/aio/bdev_aio_rpc.o 00:07:29.522 LIB libspdk_bdev_passthru.a 00:07:29.522 CC module/bdev/split/vbdev_split_rpc.o 00:07:29.522 SO libspdk_bdev_passthru.so.5.0 00:07:29.780 CC module/bdev/ftl/bdev_ftl.o 00:07:29.780 SYMLINK libspdk_bdev_passthru.so 00:07:29.780 CC module/bdev/nvme/vbdev_opal.o 00:07:29.780 CC module/bdev/iscsi/bdev_iscsi.o 00:07:29.780 LIB libspdk_bdev_split.a 00:07:29.780 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:29.780 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:29.780 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:29.780 LIB libspdk_bdev_aio.a 00:07:29.780 SO libspdk_bdev_split.so.5.0 00:07:29.780 SO libspdk_bdev_aio.so.5.0 00:07:29.780 SYMLINK libspdk_bdev_split.so 00:07:29.780 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:29.780 SYMLINK libspdk_bdev_aio.so 00:07:29.780 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:30.042 LIB libspdk_bdev_zone_block.a 00:07:30.042 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:30.042 SO libspdk_bdev_zone_block.so.5.0 00:07:30.042 CC module/bdev/raid/bdev_raid_rpc.o 00:07:30.042 CC module/bdev/raid/bdev_raid_sb.o 00:07:30.042 SYMLINK libspdk_bdev_zone_block.so 00:07:30.042 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:30.042 CC module/bdev/raid/raid0.o 00:07:30.042 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:30.042 CC module/bdev/raid/raid1.o 00:07:30.042 LIB libspdk_bdev_ftl.a 00:07:30.301 SO libspdk_bdev_ftl.so.5.0 00:07:30.301 CC module/bdev/raid/concat.o 00:07:30.301 SYMLINK libspdk_bdev_ftl.so 00:07:30.301 LIB libspdk_bdev_iscsi.a 00:07:30.301 LIB libspdk_bdev_virtio.a 00:07:30.301 SO libspdk_bdev_iscsi.so.5.0 00:07:30.301 SO libspdk_bdev_virtio.so.5.0 00:07:30.301 SYMLINK libspdk_bdev_iscsi.so 00:07:30.301 SYMLINK libspdk_bdev_virtio.so 00:07:30.301 LIB libspdk_bdev_raid.a 00:07:30.560 SO libspdk_bdev_raid.so.5.0 00:07:30.560 SYMLINK libspdk_bdev_raid.so 00:07:31.129 LIB libspdk_bdev_nvme.a 00:07:31.129 SO libspdk_bdev_nvme.so.6.0 00:07:31.389 SYMLINK libspdk_bdev_nvme.so 00:07:31.649 CC module/event/subsystems/scheduler/scheduler.o 00:07:31.649 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:31.649 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:31.649 CC module/event/subsystems/vmd/vmd.o 00:07:31.649 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:31.649 CC module/event/subsystems/sock/sock.o 00:07:31.649 CC module/event/subsystems/iobuf/iobuf.o 00:07:31.649 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:31.909 LIB libspdk_event_scheduler.a 00:07:31.909 LIB libspdk_event_vmd.a 00:07:31.909 LIB libspdk_event_sock.a 00:07:31.909 LIB libspdk_event_vhost_blk.a 00:07:31.909 LIB 
libspdk_event_vfu_tgt.a 00:07:31.909 SO libspdk_event_scheduler.so.3.0 00:07:31.909 SO libspdk_event_vmd.so.5.0 00:07:31.909 SO libspdk_event_sock.so.4.0 00:07:31.909 LIB libspdk_event_iobuf.a 00:07:31.909 SO libspdk_event_vfu_tgt.so.2.0 00:07:31.909 SO libspdk_event_vhost_blk.so.2.0 00:07:31.909 SYMLINK libspdk_event_scheduler.so 00:07:31.909 SO libspdk_event_iobuf.so.2.0 00:07:31.909 SYMLINK libspdk_event_sock.so 00:07:31.909 SYMLINK libspdk_event_vfu_tgt.so 00:07:31.909 SYMLINK libspdk_event_vmd.so 00:07:31.909 SYMLINK libspdk_event_vhost_blk.so 00:07:31.909 SYMLINK libspdk_event_iobuf.so 00:07:32.169 CC module/event/subsystems/accel/accel.o 00:07:32.428 LIB libspdk_event_accel.a 00:07:32.428 SO libspdk_event_accel.so.5.0 00:07:32.428 SYMLINK libspdk_event_accel.so 00:07:32.687 CC module/event/subsystems/bdev/bdev.o 00:07:32.947 LIB libspdk_event_bdev.a 00:07:32.947 SO libspdk_event_bdev.so.5.0 00:07:32.947 SYMLINK libspdk_event_bdev.so 00:07:33.206 CC module/event/subsystems/nbd/nbd.o 00:07:33.206 CC module/event/subsystems/ublk/ublk.o 00:07:33.206 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:33.206 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:33.206 CC module/event/subsystems/scsi/scsi.o 00:07:33.466 LIB libspdk_event_ublk.a 00:07:33.466 LIB libspdk_event_nbd.a 00:07:33.466 SO libspdk_event_ublk.so.2.0 00:07:33.466 LIB libspdk_event_scsi.a 00:07:33.466 SO libspdk_event_nbd.so.5.0 00:07:33.466 LIB libspdk_event_nvmf.a 00:07:33.466 SO libspdk_event_scsi.so.5.0 00:07:33.466 SYMLINK libspdk_event_ublk.so 00:07:33.466 SYMLINK libspdk_event_nbd.so 00:07:33.466 SO libspdk_event_nvmf.so.5.0 00:07:33.466 SYMLINK libspdk_event_scsi.so 00:07:33.466 SYMLINK libspdk_event_nvmf.so 00:07:33.725 CC module/event/subsystems/iscsi/iscsi.o 00:07:33.725 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:33.984 LIB libspdk_event_vhost_scsi.a 00:07:33.984 LIB libspdk_event_iscsi.a 00:07:33.984 SO libspdk_event_vhost_scsi.so.2.0 00:07:33.984 SO libspdk_event_iscsi.so.5.0 00:07:33.984 SYMLINK libspdk_event_vhost_scsi.so 00:07:33.984 SYMLINK libspdk_event_iscsi.so 00:07:34.244 SO libspdk.so.5.0 00:07:34.244 SYMLINK libspdk.so 00:07:34.596 CC app/trace_record/trace_record.o 00:07:34.596 CC app/spdk_nvme_identify/identify.o 00:07:34.596 CC app/spdk_lspci/spdk_lspci.o 00:07:34.596 CXX app/trace/trace.o 00:07:34.596 CC app/spdk_nvme_perf/perf.o 00:07:34.596 CC app/nvmf_tgt/nvmf_main.o 00:07:34.596 CC app/spdk_tgt/spdk_tgt.o 00:07:34.596 CC examples/accel/perf/accel_perf.o 00:07:34.596 CC app/iscsi_tgt/iscsi_tgt.o 00:07:34.596 CC test/accel/dif/dif.o 00:07:34.596 LINK spdk_lspci 00:07:34.596 LINK nvmf_tgt 00:07:34.596 LINK spdk_trace_record 00:07:34.596 LINK spdk_tgt 00:07:34.596 LINK iscsi_tgt 00:07:34.596 CC app/spdk_nvme_discover/discovery_aer.o 00:07:34.855 LINK spdk_trace 00:07:34.855 CC app/spdk_top/spdk_top.o 00:07:34.855 LINK dif 00:07:34.855 LINK accel_perf 00:07:34.855 CC app/vhost/vhost.o 00:07:34.855 LINK spdk_nvme_discover 00:07:34.855 CC app/spdk_dd/spdk_dd.o 00:07:35.115 CC test/app/bdev_svc/bdev_svc.o 00:07:35.115 CC app/fio/nvme/fio_plugin.o 00:07:35.115 LINK spdk_nvme_identify 00:07:35.115 LINK vhost 00:07:35.115 LINK spdk_nvme_perf 00:07:35.115 CC test/bdev/bdevio/bdevio.o 00:07:35.115 CC examples/bdev/hello_world/hello_bdev.o 00:07:35.115 LINK bdev_svc 00:07:35.115 CC examples/bdev/bdevperf/bdevperf.o 00:07:35.374 TEST_HEADER include/spdk/accel.h 00:07:35.374 TEST_HEADER include/spdk/accel_module.h 00:07:35.374 TEST_HEADER include/spdk/assert.h 00:07:35.374 TEST_HEADER 
include/spdk/barrier.h 00:07:35.374 TEST_HEADER include/spdk/base64.h 00:07:35.374 TEST_HEADER include/spdk/bdev.h 00:07:35.374 TEST_HEADER include/spdk/bdev_module.h 00:07:35.374 TEST_HEADER include/spdk/bdev_zone.h 00:07:35.374 TEST_HEADER include/spdk/bit_array.h 00:07:35.374 TEST_HEADER include/spdk/bit_pool.h 00:07:35.374 TEST_HEADER include/spdk/blob_bdev.h 00:07:35.374 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:35.374 TEST_HEADER include/spdk/blobfs.h 00:07:35.374 TEST_HEADER include/spdk/blob.h 00:07:35.374 TEST_HEADER include/spdk/conf.h 00:07:35.374 TEST_HEADER include/spdk/config.h 00:07:35.374 TEST_HEADER include/spdk/cpuset.h 00:07:35.374 TEST_HEADER include/spdk/crc16.h 00:07:35.374 TEST_HEADER include/spdk/crc32.h 00:07:35.374 TEST_HEADER include/spdk/crc64.h 00:07:35.374 TEST_HEADER include/spdk/dif.h 00:07:35.374 TEST_HEADER include/spdk/dma.h 00:07:35.374 LINK spdk_dd 00:07:35.374 TEST_HEADER include/spdk/endian.h 00:07:35.374 TEST_HEADER include/spdk/env_dpdk.h 00:07:35.374 TEST_HEADER include/spdk/env.h 00:07:35.374 TEST_HEADER include/spdk/event.h 00:07:35.374 TEST_HEADER include/spdk/fd_group.h 00:07:35.374 TEST_HEADER include/spdk/fd.h 00:07:35.374 TEST_HEADER include/spdk/file.h 00:07:35.374 TEST_HEADER include/spdk/ftl.h 00:07:35.374 TEST_HEADER include/spdk/gpt_spec.h 00:07:35.375 TEST_HEADER include/spdk/hexlify.h 00:07:35.375 CC test/blobfs/mkfs/mkfs.o 00:07:35.375 TEST_HEADER include/spdk/histogram_data.h 00:07:35.375 TEST_HEADER include/spdk/idxd.h 00:07:35.375 TEST_HEADER include/spdk/idxd_spec.h 00:07:35.375 TEST_HEADER include/spdk/init.h 00:07:35.375 TEST_HEADER include/spdk/ioat.h 00:07:35.375 TEST_HEADER include/spdk/ioat_spec.h 00:07:35.375 LINK hello_bdev 00:07:35.375 TEST_HEADER include/spdk/iscsi_spec.h 00:07:35.375 TEST_HEADER include/spdk/json.h 00:07:35.375 TEST_HEADER include/spdk/jsonrpc.h 00:07:35.375 TEST_HEADER include/spdk/likely.h 00:07:35.375 TEST_HEADER include/spdk/log.h 00:07:35.375 TEST_HEADER include/spdk/lvol.h 00:07:35.375 TEST_HEADER include/spdk/memory.h 00:07:35.375 TEST_HEADER include/spdk/mmio.h 00:07:35.375 TEST_HEADER include/spdk/nbd.h 00:07:35.375 TEST_HEADER include/spdk/notify.h 00:07:35.375 TEST_HEADER include/spdk/nvme.h 00:07:35.375 TEST_HEADER include/spdk/nvme_intel.h 00:07:35.375 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:35.375 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:35.375 TEST_HEADER include/spdk/nvme_spec.h 00:07:35.375 TEST_HEADER include/spdk/nvme_zns.h 00:07:35.375 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:35.375 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:35.375 TEST_HEADER include/spdk/nvmf.h 00:07:35.375 TEST_HEADER include/spdk/nvmf_spec.h 00:07:35.375 TEST_HEADER include/spdk/nvmf_transport.h 00:07:35.375 TEST_HEADER include/spdk/opal.h 00:07:35.375 TEST_HEADER include/spdk/opal_spec.h 00:07:35.375 TEST_HEADER include/spdk/pci_ids.h 00:07:35.375 TEST_HEADER include/spdk/pipe.h 00:07:35.375 TEST_HEADER include/spdk/queue.h 00:07:35.375 CC test/dma/test_dma/test_dma.o 00:07:35.375 TEST_HEADER include/spdk/reduce.h 00:07:35.375 TEST_HEADER include/spdk/rpc.h 00:07:35.375 TEST_HEADER include/spdk/scheduler.h 00:07:35.375 TEST_HEADER include/spdk/scsi.h 00:07:35.375 TEST_HEADER include/spdk/scsi_spec.h 00:07:35.375 TEST_HEADER include/spdk/sock.h 00:07:35.375 TEST_HEADER include/spdk/stdinc.h 00:07:35.375 TEST_HEADER include/spdk/string.h 00:07:35.375 TEST_HEADER include/spdk/thread.h 00:07:35.375 TEST_HEADER include/spdk/trace.h 00:07:35.375 TEST_HEADER include/spdk/trace_parser.h 00:07:35.375 
TEST_HEADER include/spdk/tree.h 00:07:35.375 TEST_HEADER include/spdk/ublk.h 00:07:35.375 TEST_HEADER include/spdk/util.h 00:07:35.375 TEST_HEADER include/spdk/uuid.h 00:07:35.375 TEST_HEADER include/spdk/version.h 00:07:35.375 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:35.375 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:35.375 TEST_HEADER include/spdk/vhost.h 00:07:35.375 TEST_HEADER include/spdk/vmd.h 00:07:35.375 TEST_HEADER include/spdk/xor.h 00:07:35.375 TEST_HEADER include/spdk/zipf.h 00:07:35.375 CXX test/cpp_headers/accel.o 00:07:35.635 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:35.635 LINK bdevio 00:07:35.635 LINK spdk_nvme 00:07:35.635 LINK spdk_top 00:07:35.635 LINK mkfs 00:07:35.635 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:35.635 CXX test/cpp_headers/accel_module.o 00:07:35.635 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:35.635 CC app/fio/bdev/fio_plugin.o 00:07:35.635 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:35.894 CXX test/cpp_headers/assert.o 00:07:35.894 CXX test/cpp_headers/barrier.o 00:07:35.894 LINK test_dma 00:07:35.894 LINK nvme_fuzz 00:07:35.894 LINK bdevperf 00:07:35.894 CC examples/ioat/perf/perf.o 00:07:35.894 CXX test/cpp_headers/base64.o 00:07:35.894 CC examples/blob/hello_world/hello_blob.o 00:07:36.154 CC examples/blob/cli/blobcli.o 00:07:36.154 CC test/app/histogram_perf/histogram_perf.o 00:07:36.154 CXX test/cpp_headers/bdev.o 00:07:36.154 CC test/app/jsoncat/jsoncat.o 00:07:36.154 LINK ioat_perf 00:07:36.154 LINK spdk_bdev 00:07:36.154 CC test/app/stub/stub.o 00:07:36.154 LINK vhost_fuzz 00:07:36.154 LINK hello_blob 00:07:36.154 LINK histogram_perf 00:07:36.421 LINK jsoncat 00:07:36.421 CXX test/cpp_headers/bdev_module.o 00:07:36.421 CC examples/ioat/verify/verify.o 00:07:36.421 LINK stub 00:07:36.421 CC test/event/event_perf/event_perf.o 00:07:36.421 CC test/env/mem_callbacks/mem_callbacks.o 00:07:36.421 CC test/env/vtophys/vtophys.o 00:07:36.421 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:36.421 CXX test/cpp_headers/bdev_zone.o 00:07:36.421 CC test/event/reactor/reactor.o 00:07:36.421 LINK verify 00:07:36.679 CXX test/cpp_headers/bit_array.o 00:07:36.679 LINK blobcli 00:07:36.679 LINK vtophys 00:07:36.679 LINK event_perf 00:07:36.680 LINK env_dpdk_post_init 00:07:36.680 LINK reactor 00:07:36.680 CXX test/cpp_headers/bit_pool.o 00:07:36.680 CC test/env/memory/memory_ut.o 00:07:36.680 CC test/env/pci/pci_ut.o 00:07:36.939 CXX test/cpp_headers/blob_bdev.o 00:07:36.940 CC test/event/reactor_perf/reactor_perf.o 00:07:36.940 CC examples/nvme/hello_world/hello_world.o 00:07:36.940 CC examples/sock/hello_world/hello_sock.o 00:07:36.940 CC examples/vmd/lsvmd/lsvmd.o 00:07:36.940 CC test/lvol/esnap/esnap.o 00:07:36.940 LINK reactor_perf 00:07:36.940 LINK lsvmd 00:07:36.940 CXX test/cpp_headers/blobfs_bdev.o 00:07:36.940 LINK mem_callbacks 00:07:37.201 LINK pci_ut 00:07:37.201 LINK hello_world 00:07:37.201 LINK hello_sock 00:07:37.201 CC examples/vmd/led/led.o 00:07:37.201 LINK iscsi_fuzz 00:07:37.201 CXX test/cpp_headers/blobfs.o 00:07:37.201 CC test/event/app_repeat/app_repeat.o 00:07:37.201 CC examples/nvme/reconnect/reconnect.o 00:07:37.201 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:37.460 LINK led 00:07:37.460 CXX test/cpp_headers/blob.o 00:07:37.460 LINK app_repeat 00:07:37.460 CC examples/nvme/arbitration/arbitration.o 00:07:37.460 CC examples/nvmf/nvmf/nvmf.o 00:07:37.460 CXX test/cpp_headers/conf.o 00:07:37.460 CC test/nvme/aer/aer.o 00:07:37.460 CC examples/nvme/hotplug/hotplug.o 00:07:37.460 LINK reconnect 
00:07:37.460 LINK memory_ut 00:07:37.719 CC test/event/scheduler/scheduler.o 00:07:37.719 CXX test/cpp_headers/config.o 00:07:37.719 CXX test/cpp_headers/cpuset.o 00:07:37.719 LINK arbitration 00:07:37.719 LINK nvmf 00:07:37.719 LINK hotplug 00:07:37.719 LINK nvme_manage 00:07:37.719 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:37.719 LINK aer 00:07:37.719 CXX test/cpp_headers/crc16.o 00:07:37.977 LINK scheduler 00:07:37.977 CC test/rpc_client/rpc_client_test.o 00:07:37.977 CC examples/nvme/abort/abort.o 00:07:37.977 CXX test/cpp_headers/crc32.o 00:07:37.977 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:37.977 LINK cmb_copy 00:07:37.977 CC test/nvme/reset/reset.o 00:07:37.977 CC test/nvme/sgl/sgl.o 00:07:37.977 CC test/thread/poller_perf/poller_perf.o 00:07:37.977 LINK rpc_client_test 00:07:37.977 CXX test/cpp_headers/crc64.o 00:07:38.236 LINK pmr_persistence 00:07:38.236 CC test/nvme/e2edp/nvme_dp.o 00:07:38.236 CC test/nvme/overhead/overhead.o 00:07:38.236 LINK poller_perf 00:07:38.236 CXX test/cpp_headers/dif.o 00:07:38.236 LINK reset 00:07:38.236 LINK abort 00:07:38.236 LINK sgl 00:07:38.236 CC examples/util/zipf/zipf.o 00:07:38.236 CC test/nvme/err_injection/err_injection.o 00:07:38.236 CXX test/cpp_headers/dma.o 00:07:38.495 CC test/nvme/startup/startup.o 00:07:38.495 LINK nvme_dp 00:07:38.495 LINK overhead 00:07:38.495 CXX test/cpp_headers/endian.o 00:07:38.495 CC test/nvme/reserve/reserve.o 00:07:38.495 LINK zipf 00:07:38.495 LINK err_injection 00:07:38.495 CC test/nvme/simple_copy/simple_copy.o 00:07:38.495 CC test/nvme/connect_stress/connect_stress.o 00:07:38.495 CXX test/cpp_headers/env_dpdk.o 00:07:38.495 LINK startup 00:07:38.755 CC test/nvme/boot_partition/boot_partition.o 00:07:38.755 CC test/nvme/compliance/nvme_compliance.o 00:07:38.755 CXX test/cpp_headers/env.o 00:07:38.755 LINK reserve 00:07:38.755 LINK connect_stress 00:07:38.755 LINK simple_copy 00:07:38.755 CC test/nvme/fused_ordering/fused_ordering.o 00:07:38.755 CC examples/thread/thread/thread_ex.o 00:07:38.755 LINK boot_partition 00:07:38.755 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:39.015 CXX test/cpp_headers/event.o 00:07:39.015 CXX test/cpp_headers/fd_group.o 00:07:39.015 LINK nvme_compliance 00:07:39.015 LINK fused_ordering 00:07:39.015 LINK doorbell_aers 00:07:39.015 CC test/nvme/fdp/fdp.o 00:07:39.015 LINK thread 00:07:39.015 CC examples/idxd/perf/perf.o 00:07:39.015 CXX test/cpp_headers/fd.o 00:07:39.015 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:39.274 CXX test/cpp_headers/file.o 00:07:39.274 CXX test/cpp_headers/ftl.o 00:07:39.274 CC test/nvme/cuse/cuse.o 00:07:39.274 CXX test/cpp_headers/gpt_spec.o 00:07:39.274 CXX test/cpp_headers/hexlify.o 00:07:39.274 CXX test/cpp_headers/histogram_data.o 00:07:39.274 LINK fdp 00:07:39.274 LINK interrupt_tgt 00:07:39.274 LINK idxd_perf 00:07:39.274 CXX test/cpp_headers/idxd.o 00:07:39.533 CXX test/cpp_headers/idxd_spec.o 00:07:39.533 CXX test/cpp_headers/init.o 00:07:39.533 CXX test/cpp_headers/ioat.o 00:07:39.533 CXX test/cpp_headers/ioat_spec.o 00:07:39.533 CXX test/cpp_headers/iscsi_spec.o 00:07:39.533 CXX test/cpp_headers/json.o 00:07:39.533 CXX test/cpp_headers/jsonrpc.o 00:07:39.533 CXX test/cpp_headers/likely.o 00:07:39.533 CXX test/cpp_headers/log.o 00:07:39.533 CXX test/cpp_headers/lvol.o 00:07:39.533 CXX test/cpp_headers/memory.o 00:07:39.792 CXX test/cpp_headers/mmio.o 00:07:39.792 CXX test/cpp_headers/nbd.o 00:07:39.792 CXX test/cpp_headers/notify.o 00:07:39.792 CXX test/cpp_headers/nvme.o 00:07:39.792 CXX 
test/cpp_headers/nvme_intel.o 00:07:39.792 CXX test/cpp_headers/nvme_ocssd.o 00:07:39.792 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:39.792 CXX test/cpp_headers/nvme_spec.o 00:07:39.792 CXX test/cpp_headers/nvme_zns.o 00:07:39.792 CXX test/cpp_headers/nvmf_cmd.o 00:07:40.051 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:40.051 CXX test/cpp_headers/nvmf.o 00:07:40.051 CXX test/cpp_headers/nvmf_spec.o 00:07:40.051 CXX test/cpp_headers/nvmf_transport.o 00:07:40.051 CXX test/cpp_headers/opal.o 00:07:40.051 CXX test/cpp_headers/opal_spec.o 00:07:40.051 CXX test/cpp_headers/pci_ids.o 00:07:40.311 CXX test/cpp_headers/pipe.o 00:07:40.311 CXX test/cpp_headers/queue.o 00:07:40.311 CXX test/cpp_headers/reduce.o 00:07:40.311 CXX test/cpp_headers/rpc.o 00:07:40.311 LINK cuse 00:07:40.311 CXX test/cpp_headers/scheduler.o 00:07:40.311 CXX test/cpp_headers/scsi.o 00:07:40.311 CXX test/cpp_headers/scsi_spec.o 00:07:40.311 CXX test/cpp_headers/sock.o 00:07:40.311 CXX test/cpp_headers/stdinc.o 00:07:40.311 CXX test/cpp_headers/string.o 00:07:40.311 CXX test/cpp_headers/thread.o 00:07:40.570 CXX test/cpp_headers/trace.o 00:07:40.570 CXX test/cpp_headers/trace_parser.o 00:07:40.570 CXX test/cpp_headers/tree.o 00:07:40.570 CXX test/cpp_headers/ublk.o 00:07:40.570 CXX test/cpp_headers/util.o 00:07:40.570 CXX test/cpp_headers/uuid.o 00:07:40.570 CXX test/cpp_headers/version.o 00:07:40.570 CXX test/cpp_headers/vfio_user_pci.o 00:07:40.570 CXX test/cpp_headers/vfio_user_spec.o 00:07:40.570 CXX test/cpp_headers/vhost.o 00:07:40.570 CXX test/cpp_headers/vmd.o 00:07:40.570 CXX test/cpp_headers/xor.o 00:07:40.570 CXX test/cpp_headers/zipf.o 00:07:41.506 LINK esnap 00:07:46.833 00:07:46.833 real 1m2.313s 00:07:46.833 user 6m9.467s 00:07:46.833 sys 1m27.520s 00:07:46.833 11:35:18 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:07:46.833 11:35:18 -- common/autotest_common.sh@10 -- $ set +x 00:07:46.833 ************************************ 00:07:46.833 END TEST make 00:07:46.833 ************************************ 00:07:46.833 11:35:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:46.833 11:35:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:46.833 11:35:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:46.833 11:35:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:46.833 11:35:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:46.833 11:35:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:46.833 11:35:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:46.833 11:35:19 -- scripts/common.sh@335 -- # IFS=.-: 00:07:46.833 11:35:19 -- scripts/common.sh@335 -- # read -ra ver1 00:07:46.833 11:35:19 -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.833 11:35:19 -- scripts/common.sh@336 -- # read -ra ver2 00:07:46.833 11:35:19 -- scripts/common.sh@337 -- # local 'op=<' 00:07:46.833 11:35:19 -- scripts/common.sh@339 -- # ver1_l=2 00:07:46.833 11:35:19 -- scripts/common.sh@340 -- # ver2_l=1 00:07:46.833 11:35:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:46.833 11:35:19 -- scripts/common.sh@343 -- # case "$op" in 00:07:46.833 11:35:19 -- scripts/common.sh@344 -- # : 1 00:07:46.833 11:35:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:46.833 11:35:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.833 11:35:19 -- scripts/common.sh@364 -- # decimal 1 00:07:46.833 11:35:19 -- scripts/common.sh@352 -- # local d=1 00:07:46.833 11:35:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.833 11:35:19 -- scripts/common.sh@354 -- # echo 1 00:07:46.833 11:35:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:46.833 11:35:19 -- scripts/common.sh@365 -- # decimal 2 00:07:46.833 11:35:19 -- scripts/common.sh@352 -- # local d=2 00:07:46.833 11:35:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.833 11:35:19 -- scripts/common.sh@354 -- # echo 2 00:07:46.833 11:35:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:46.833 11:35:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:46.833 11:35:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:46.833 11:35:19 -- scripts/common.sh@367 -- # return 0 00:07:46.833 11:35:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.833 11:35:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:46.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.833 --rc genhtml_branch_coverage=1 00:07:46.833 --rc genhtml_function_coverage=1 00:07:46.833 --rc genhtml_legend=1 00:07:46.833 --rc geninfo_all_blocks=1 00:07:46.833 --rc geninfo_unexecuted_blocks=1 00:07:46.833 00:07:46.833 ' 00:07:46.833 11:35:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:46.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.833 --rc genhtml_branch_coverage=1 00:07:46.833 --rc genhtml_function_coverage=1 00:07:46.833 --rc genhtml_legend=1 00:07:46.833 --rc geninfo_all_blocks=1 00:07:46.833 --rc geninfo_unexecuted_blocks=1 00:07:46.833 00:07:46.833 ' 00:07:46.833 11:35:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:46.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.833 --rc genhtml_branch_coverage=1 00:07:46.833 --rc genhtml_function_coverage=1 00:07:46.833 --rc genhtml_legend=1 00:07:46.833 --rc geninfo_all_blocks=1 00:07:46.833 --rc geninfo_unexecuted_blocks=1 00:07:46.833 00:07:46.833 ' 00:07:46.833 11:35:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:46.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.833 --rc genhtml_branch_coverage=1 00:07:46.833 --rc genhtml_function_coverage=1 00:07:46.833 --rc genhtml_legend=1 00:07:46.833 --rc geninfo_all_blocks=1 00:07:46.833 --rc geninfo_unexecuted_blocks=1 00:07:46.833 00:07:46.833 ' 00:07:46.833 11:35:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.833 11:35:19 -- nvmf/common.sh@7 -- # uname -s 00:07:46.833 11:35:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.833 11:35:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.833 11:35:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.833 11:35:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.833 11:35:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.833 11:35:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.833 11:35:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.833 11:35:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.833 11:35:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.833 11:35:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.833 11:35:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:07:46.833 
11:35:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:07:46.833 11:35:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.833 11:35:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.833 11:35:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.833 11:35:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.833 11:35:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.833 11:35:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.833 11:35:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.833 11:35:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.833 11:35:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.833 11:35:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.833 11:35:19 -- paths/export.sh@5 -- # export PATH 00:07:46.833 11:35:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.833 11:35:19 -- nvmf/common.sh@46 -- # : 0 00:07:46.833 11:35:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:46.833 11:35:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:46.833 11:35:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:46.833 11:35:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.833 11:35:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.833 11:35:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:46.833 11:35:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:46.833 11:35:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:46.833 11:35:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:46.833 11:35:19 -- spdk/autotest.sh@32 -- # uname -s 00:07:46.833 11:35:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:46.833 11:35:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:46.833 11:35:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:46.833 11:35:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:46.833 11:35:19 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:46.833 11:35:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:46.833 11:35:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:46.833 11:35:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:46.833 11:35:19 -- spdk/autotest.sh@48 -- # 
udevadm_pid=49943 00:07:46.833 11:35:19 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:07:46.833 11:35:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:46.833 11:35:19 -- spdk/autotest.sh@54 -- # echo 49946 00:07:46.834 11:35:19 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:46.834 11:35:19 -- spdk/autotest.sh@56 -- # echo 49947 00:07:46.834 11:35:19 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:46.834 11:35:19 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:07:46.834 11:35:19 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:46.834 11:35:19 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:07:46.834 11:35:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.834 11:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.834 11:35:19 -- spdk/autotest.sh@70 -- # create_test_list 00:07:46.834 11:35:19 -- common/autotest_common.sh@746 -- # xtrace_disable 00:07:46.834 11:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.834 11:35:19 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:46.834 11:35:19 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:46.834 11:35:19 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:07:46.834 11:35:19 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:46.834 11:35:19 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:07:46.834 11:35:19 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:07:46.834 11:35:19 -- common/autotest_common.sh@1450 -- # uname 00:07:46.834 11:35:19 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:07:46.834 11:35:19 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:07:46.834 11:35:19 -- common/autotest_common.sh@1470 -- # uname 00:07:46.834 11:35:19 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:07:46.834 11:35:19 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:07:46.834 11:35:19 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:46.834 lcov: LCOV version 1.15 00:07:46.834 11:35:19 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:54.958 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:54.958 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:54.958 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:54.958 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:54.958 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:54.958 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:08:16.933 11:35:49 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:08:16.933 11:35:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.933 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.933 11:35:49 -- spdk/autotest.sh@89 -- # rm -f 00:08:16.933 11:35:49 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:17.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:17.451 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:08:17.451 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:08:17.451 11:35:50 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:08:17.451 11:35:50 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:17.451 11:35:50 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:17.451 11:35:50 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:17.451 11:35:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:17.451 11:35:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:17.451 11:35:50 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:17.451 11:35:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:17.451 11:35:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:17.451 11:35:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:17.451 11:35:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2 00:08:17.451 11:35:50 -- common/autotest_common.sh@1657 -- # local device=nvme0n2 00:08:17.451 11:35:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:17.451 11:35:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:17.451 11:35:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:17.451 11:35:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3 00:08:17.451 11:35:50 -- common/autotest_common.sh@1657 -- # local device=nvme0n3 00:08:17.451 11:35:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:17.451 11:35:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:17.451 11:35:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:17.451 11:35:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:17.451 11:35:50 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:17.451 11:35:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:17.451 11:35:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:17.451 11:35:50 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:08:17.451 11:35:50 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 00:08:17.451 11:35:50 -- spdk/autotest.sh@108 -- # grep -v p 00:08:17.451 11:35:50 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:17.451 11:35:50 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:17.451 11:35:50 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:08:17.451 11:35:50 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:08:17.451 11:35:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:17.451 No valid GPT data, bailing 00:08:17.451 11:35:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:08:17.451 11:35:50 -- scripts/common.sh@393 -- # pt= 00:08:17.451 11:35:50 -- scripts/common.sh@394 -- # return 1 00:08:17.451 11:35:50 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:17.451 1+0 records in 00:08:17.451 1+0 records out 00:08:17.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695534 s, 151 MB/s 00:08:17.451 11:35:50 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:17.451 11:35:50 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:17.451 11:35:50 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n2 00:08:17.451 11:35:50 -- scripts/common.sh@380 -- # local block=/dev/nvme0n2 pt 00:08:17.451 11:35:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:08:17.709 No valid GPT data, bailing 00:08:17.709 11:35:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:17.709 11:35:50 -- scripts/common.sh@393 -- # pt= 00:08:17.709 11:35:50 -- scripts/common.sh@394 -- # return 1 00:08:17.709 11:35:50 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:08:17.709 1+0 records in 00:08:17.709 1+0 records out 00:08:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00711807 s, 147 MB/s 00:08:17.709 11:35:50 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:17.709 11:35:50 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:17.709 11:35:50 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n3 00:08:17.709 11:35:50 -- scripts/common.sh@380 -- # local block=/dev/nvme0n3 pt 00:08:17.709 11:35:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:08:17.709 No valid GPT data, bailing 00:08:17.709 11:35:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:17.709 11:35:50 -- scripts/common.sh@393 -- # pt= 00:08:17.709 11:35:50 -- scripts/common.sh@394 -- # return 1 00:08:17.709 11:35:50 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:08:17.709 1+0 records in 00:08:17.709 1+0 records out 00:08:17.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00690778 s, 152 MB/s 00:08:17.710 11:35:50 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:17.710 11:35:50 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:17.710 11:35:50 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:08:17.710 11:35:50 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:08:17.710 11:35:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:17.710 No valid GPT data, bailing 00:08:17.710 11:35:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:17.710 11:35:50 -- scripts/common.sh@393 -- # pt= 00:08:17.710 11:35:50 -- scripts/common.sh@394 -- # return 1 00:08:17.710 11:35:50 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:17.710 1+0 records in 00:08:17.710 1+0 records out 00:08:17.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672877 s, 156 MB/s 00:08:17.710 11:35:50 -- spdk/autotest.sh@116 -- # sync 00:08:17.968 11:35:50 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:17.968 11:35:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:17.968 11:35:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:20.506 11:35:53 -- spdk/autotest.sh@122 -- # uname -s 00:08:20.506 11:35:53 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
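Before any test runs, the trace above scrubs each candidate NVMe namespace: a namespace is skipped when its queue reports a zoned layout, and its first MiB is zeroed only when neither spdk-gpt.py nor blkid finds a partition table on it. A condensed sketch of that flow, written here for illustration only (it is not the autotest.sh/common.sh code, and the device names are simply whatever /dev/nvme*n* matches on the host):

    #!/usr/bin/env bash
    # Sketch: collect zoned namespaces, then zero the first MiB of every
    # non-zoned, non-partitioned whole namespace (mirrors the checks above).
    declare -A zoned=()
    for sysdev in /sys/block/nvme*; do
        name=${sysdev##*/}
        if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
            zoned[$name]=1    # zoned namespaces must not be blindly wiped
        fi
    done
    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                 # skip partitions
        [[ -n ${zoned[${dev##*/}]:-} ]] && continue   # skip zoned namespaces
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # no partition table found
        fi
    done

The real helper additionally consults spdk-gpt.py before falling back to blkid, which this sketch omits.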
00:08:20.506 11:35:53 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:20.506 11:35:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:20.506 11:35:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.506 11:35:53 -- common/autotest_common.sh@10 -- # set +x 00:08:20.506 ************************************ 00:08:20.506 START TEST setup.sh 00:08:20.506 ************************************ 00:08:20.506 11:35:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:20.506 * Looking for test storage... 00:08:20.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:20.506 11:35:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:20.506 11:35:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:20.506 11:35:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:20.506 11:35:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:20.506 11:35:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:20.506 11:35:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:20.506 11:35:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:20.506 11:35:53 -- scripts/common.sh@335 -- # IFS=.-: 00:08:20.506 11:35:53 -- scripts/common.sh@335 -- # read -ra ver1 00:08:20.506 11:35:53 -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.506 11:35:53 -- scripts/common.sh@336 -- # read -ra ver2 00:08:20.506 11:35:53 -- scripts/common.sh@337 -- # local 'op=<' 00:08:20.506 11:35:53 -- scripts/common.sh@339 -- # ver1_l=2 00:08:20.506 11:35:53 -- scripts/common.sh@340 -- # ver2_l=1 00:08:20.506 11:35:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:20.506 11:35:53 -- scripts/common.sh@343 -- # case "$op" in 00:08:20.506 11:35:53 -- scripts/common.sh@344 -- # : 1 00:08:20.506 11:35:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:20.506 11:35:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.506 11:35:53 -- scripts/common.sh@364 -- # decimal 1 00:08:20.506 11:35:53 -- scripts/common.sh@352 -- # local d=1 00:08:20.506 11:35:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.506 11:35:53 -- scripts/common.sh@354 -- # echo 1 00:08:20.506 11:35:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:20.506 11:35:53 -- scripts/common.sh@365 -- # decimal 2 00:08:20.506 11:35:53 -- scripts/common.sh@352 -- # local d=2 00:08:20.506 11:35:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.506 11:35:53 -- scripts/common.sh@354 -- # echo 2 00:08:20.506 11:35:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.506 11:35:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.506 11:35:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.506 11:35:53 -- scripts/common.sh@367 -- # return 0 00:08:20.506 11:35:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.506 11:35:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.506 --rc genhtml_branch_coverage=1 00:08:20.506 --rc genhtml_function_coverage=1 00:08:20.506 --rc genhtml_legend=1 00:08:20.506 --rc geninfo_all_blocks=1 00:08:20.506 --rc geninfo_unexecuted_blocks=1 00:08:20.506 00:08:20.506 ' 00:08:20.506 11:35:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.506 --rc genhtml_branch_coverage=1 00:08:20.506 --rc genhtml_function_coverage=1 00:08:20.506 --rc genhtml_legend=1 00:08:20.506 --rc geninfo_all_blocks=1 00:08:20.506 --rc geninfo_unexecuted_blocks=1 00:08:20.506 00:08:20.506 ' 00:08:20.506 11:35:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.506 --rc genhtml_branch_coverage=1 00:08:20.506 --rc genhtml_function_coverage=1 00:08:20.506 --rc genhtml_legend=1 00:08:20.506 --rc geninfo_all_blocks=1 00:08:20.506 --rc geninfo_unexecuted_blocks=1 00:08:20.506 00:08:20.506 ' 00:08:20.506 11:35:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.506 --rc genhtml_branch_coverage=1 00:08:20.506 --rc genhtml_function_coverage=1 00:08:20.506 --rc genhtml_legend=1 00:08:20.506 --rc geninfo_all_blocks=1 00:08:20.506 --rc geninfo_unexecuted_blocks=1 00:08:20.506 00:08:20.506 ' 00:08:20.506 11:35:53 -- setup/test-setup.sh@10 -- # uname -s 00:08:20.506 11:35:53 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:20.506 11:35:53 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:20.506 11:35:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:20.506 11:35:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.506 11:35:53 -- common/autotest_common.sh@10 -- # set +x 00:08:20.506 ************************************ 00:08:20.506 START TEST acl 00:08:20.506 ************************************ 00:08:20.506 11:35:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:20.506 * Looking for test storage... 
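The lcov option selection that repeats at the top of every test script above hinges on a dotted-version comparison: the output of lcov --version is reduced to its last field and tested with lt 1.15 2, which cmp_versions walks component by component in the trace. A standalone sketch of that comparison (illustrative; not the scripts/common.sh implementation) might look like:

    # Returns 0 (true) when dotted version $1 is strictly older than $2.
    version_lt() {
        local IFS=.-
        local -a a b
        local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
            if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        done
        return 1    # equal versions are not "less than"
    }

    # As in the trace: pre-2.0 lcov gets the legacy --rc branch/function flags.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

Here 1.15 compares below 2 on the first component, so the legacy options are selected, matching the LCOV_OPTS export seen in the log.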
00:08:20.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:20.506 11:35:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:20.506 11:35:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:20.506 11:35:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:20.765 11:35:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:20.765 11:35:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:20.765 11:35:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:20.765 11:35:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:20.765 11:35:53 -- scripts/common.sh@335 -- # IFS=.-: 00:08:20.765 11:35:53 -- scripts/common.sh@335 -- # read -ra ver1 00:08:20.765 11:35:53 -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.765 11:35:53 -- scripts/common.sh@336 -- # read -ra ver2 00:08:20.765 11:35:53 -- scripts/common.sh@337 -- # local 'op=<' 00:08:20.765 11:35:53 -- scripts/common.sh@339 -- # ver1_l=2 00:08:20.765 11:35:53 -- scripts/common.sh@340 -- # ver2_l=1 00:08:20.765 11:35:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:20.765 11:35:53 -- scripts/common.sh@343 -- # case "$op" in 00:08:20.765 11:35:53 -- scripts/common.sh@344 -- # : 1 00:08:20.765 11:35:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:20.765 11:35:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.765 11:35:53 -- scripts/common.sh@364 -- # decimal 1 00:08:20.765 11:35:53 -- scripts/common.sh@352 -- # local d=1 00:08:20.765 11:35:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.765 11:35:53 -- scripts/common.sh@354 -- # echo 1 00:08:20.765 11:35:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:20.765 11:35:53 -- scripts/common.sh@365 -- # decimal 2 00:08:20.765 11:35:53 -- scripts/common.sh@352 -- # local d=2 00:08:20.765 11:35:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.765 11:35:53 -- scripts/common.sh@354 -- # echo 2 00:08:20.765 11:35:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.765 11:35:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.765 11:35:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.765 11:35:53 -- scripts/common.sh@367 -- # return 0 00:08:20.765 11:35:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.765 11:35:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.765 --rc genhtml_branch_coverage=1 00:08:20.766 --rc genhtml_function_coverage=1 00:08:20.766 --rc genhtml_legend=1 00:08:20.766 --rc geninfo_all_blocks=1 00:08:20.766 --rc geninfo_unexecuted_blocks=1 00:08:20.766 00:08:20.766 ' 00:08:20.766 11:35:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.766 --rc genhtml_branch_coverage=1 00:08:20.766 --rc genhtml_function_coverage=1 00:08:20.766 --rc genhtml_legend=1 00:08:20.766 --rc geninfo_all_blocks=1 00:08:20.766 --rc geninfo_unexecuted_blocks=1 00:08:20.766 00:08:20.766 ' 00:08:20.766 11:35:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.766 --rc genhtml_branch_coverage=1 00:08:20.766 --rc genhtml_function_coverage=1 00:08:20.766 --rc genhtml_legend=1 00:08:20.766 --rc geninfo_all_blocks=1 00:08:20.766 --rc geninfo_unexecuted_blocks=1 00:08:20.766 00:08:20.766 ' 00:08:20.766 11:35:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.766 --rc genhtml_branch_coverage=1 00:08:20.766 --rc genhtml_function_coverage=1 00:08:20.766 --rc genhtml_legend=1 00:08:20.766 --rc geninfo_all_blocks=1 00:08:20.766 --rc geninfo_unexecuted_blocks=1 00:08:20.766 00:08:20.766 ' 00:08:20.766 11:35:53 -- setup/acl.sh@10 -- # get_zoned_devs 00:08:20.766 11:35:53 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:20.766 11:35:53 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:20.766 11:35:53 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:20.766 11:35:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:20.766 11:35:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:20.766 11:35:53 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:20.766 11:35:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:20.766 11:35:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:20.766 11:35:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:20.766 11:35:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2 00:08:20.766 11:35:53 -- common/autotest_common.sh@1657 -- # local device=nvme0n2 00:08:20.766 11:35:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:20.766 11:35:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:20.766 11:35:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:20.766 11:35:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3 00:08:20.766 11:35:53 -- common/autotest_common.sh@1657 -- # local device=nvme0n3 00:08:20.766 11:35:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:20.766 11:35:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:20.766 11:35:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:20.766 11:35:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:20.766 11:35:53 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:20.766 11:35:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:20.766 11:35:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:20.766 11:35:53 -- setup/acl.sh@12 -- # devs=() 00:08:20.766 11:35:53 -- setup/acl.sh@12 -- # declare -a devs 00:08:20.766 11:35:53 -- setup/acl.sh@13 -- # drivers=() 00:08:20.766 11:35:53 -- setup/acl.sh@13 -- # declare -A drivers 00:08:20.766 11:35:53 -- setup/acl.sh@51 -- # setup reset 00:08:20.766 11:35:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:20.766 11:35:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:21.702 11:35:54 -- setup/acl.sh@52 -- # collect_setup_devs 00:08:21.702 11:35:54 -- setup/acl.sh@16 -- # local dev driver 00:08:21.702 11:35:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:21.702 11:35:54 -- setup/acl.sh@15 -- # setup output status 00:08:21.702 11:35:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:21.702 11:35:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:21.702 Hugepages 00:08:21.702 node hugesize free / total 00:08:21.702 11:35:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:21.702 11:35:54 -- setup/acl.sh@19 -- # continue 00:08:21.702 11:35:54 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:08:21.702 00:08:21.702 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:21.702 11:35:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:21.702 11:35:54 -- setup/acl.sh@19 -- # continue 00:08:21.702 11:35:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:21.962 11:35:54 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:08:21.962 11:35:54 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:08:21.962 11:35:54 -- setup/acl.sh@20 -- # continue 00:08:21.962 11:35:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:21.962 11:35:54 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:08:21.962 11:35:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:21.962 11:35:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:21.962 11:35:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:21.962 11:35:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:21.962 11:35:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:22.219 11:35:55 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:08:22.219 11:35:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:22.219 11:35:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:22.219 11:35:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:22.219 11:35:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:22.219 11:35:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:22.219 11:35:55 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:08:22.219 11:35:55 -- setup/acl.sh@54 -- # run_test denied denied 00:08:22.219 11:35:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:22.219 11:35:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.219 11:35:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.219 ************************************ 00:08:22.219 START TEST denied 00:08:22.219 ************************************ 00:08:22.219 11:35:55 -- common/autotest_common.sh@1114 -- # denied 00:08:22.219 11:35:55 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:08:22.219 11:35:55 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:08:22.219 11:35:55 -- setup/acl.sh@38 -- # setup output config 00:08:22.219 11:35:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:22.219 11:35:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:23.152 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:08:23.152 11:35:55 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:08:23.152 11:35:55 -- setup/acl.sh@28 -- # local dev driver 00:08:23.152 11:35:55 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:23.152 11:35:55 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:08:23.152 11:35:55 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:08:23.152 11:35:55 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:23.152 11:35:55 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:23.152 11:35:55 -- setup/acl.sh@41 -- # setup reset 00:08:23.152 11:35:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:23.152 11:35:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:23.471 00:08:23.471 real 0m1.437s 00:08:23.471 user 0m0.547s 00:08:23.471 sys 0m0.868s 00:08:23.471 11:35:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.471 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.471 ************************************ 00:08:23.471 END TEST denied 00:08:23.471 
************************************ 00:08:23.471 11:35:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:23.471 11:35:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.471 11:35:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.471 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 ************************************ 00:08:23.729 START TEST allowed 00:08:23.729 ************************************ 00:08:23.729 11:35:56 -- common/autotest_common.sh@1114 -- # allowed 00:08:23.729 11:35:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:08:23.729 11:35:56 -- setup/acl.sh@45 -- # setup output config 00:08:23.729 11:35:56 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:08:23.729 11:35:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:23.729 11:35:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:24.671 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:24.671 11:35:57 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:08:24.671 11:35:57 -- setup/acl.sh@28 -- # local dev driver 00:08:24.671 11:35:57 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:24.671 11:35:57 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:08:24.671 11:35:57 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:08:24.671 11:35:57 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:24.671 11:35:57 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:24.671 11:35:57 -- setup/acl.sh@48 -- # setup reset 00:08:24.671 11:35:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:24.671 11:35:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:25.238 00:08:25.238 real 0m1.703s 00:08:25.238 user 0m0.697s 00:08:25.238 sys 0m1.036s 00:08:25.238 11:35:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.238 11:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.238 ************************************ 00:08:25.238 END TEST allowed 00:08:25.238 ************************************ 00:08:25.238 00:08:25.238 real 0m4.866s 00:08:25.238 user 0m1.987s 00:08:25.238 sys 0m2.941s 00:08:25.238 11:35:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.238 11:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.238 ************************************ 00:08:25.238 END TEST acl 00:08:25.238 ************************************ 00:08:25.498 11:35:58 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:25.498 11:35:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.498 11:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.498 11:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.498 ************************************ 00:08:25.498 START TEST hugepages 00:08:25.498 ************************************ 00:08:25.498 11:35:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:25.498 * Looking for test storage... 
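The hugepages test that starts here sizes its allocation from /proc/meminfo: get_meminfo, traced field by field further below, scans the file until it reaches Hugepagesize and echoes 2048 (kB); the 2097152 kB request seen later in the trace corresponds to 1024 pages of that size. A minimal sketch of the lookup (illustrative only, not the setup/common.sh helper):

    # Print the default hugepage size in kB, as reported by /proc/meminfo.
    get_hugepagesize_kb() {
        local var val unit
        while read -r var val unit; do
            if [[ $var == Hugepagesize: ]]; then
                echo "$val"     # e.g. 2048
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    page_kb=$(get_hugepagesize_kb)
    nr_hugepages=$(( 2097152 / page_kb ))   # 2097152 kB target / 2048 kB pages = 1024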
00:08:25.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:25.498 11:35:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:25.498 11:35:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:25.498 11:35:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:25.498 11:35:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:25.498 11:35:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:25.498 11:35:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:25.498 11:35:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:25.498 11:35:58 -- scripts/common.sh@335 -- # IFS=.-: 00:08:25.498 11:35:58 -- scripts/common.sh@335 -- # read -ra ver1 00:08:25.498 11:35:58 -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.498 11:35:58 -- scripts/common.sh@336 -- # read -ra ver2 00:08:25.498 11:35:58 -- scripts/common.sh@337 -- # local 'op=<' 00:08:25.498 11:35:58 -- scripts/common.sh@339 -- # ver1_l=2 00:08:25.498 11:35:58 -- scripts/common.sh@340 -- # ver2_l=1 00:08:25.498 11:35:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:25.498 11:35:58 -- scripts/common.sh@343 -- # case "$op" in 00:08:25.498 11:35:58 -- scripts/common.sh@344 -- # : 1 00:08:25.498 11:35:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:25.498 11:35:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.498 11:35:58 -- scripts/common.sh@364 -- # decimal 1 00:08:25.498 11:35:58 -- scripts/common.sh@352 -- # local d=1 00:08:25.498 11:35:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.498 11:35:58 -- scripts/common.sh@354 -- # echo 1 00:08:25.498 11:35:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:25.498 11:35:58 -- scripts/common.sh@365 -- # decimal 2 00:08:25.498 11:35:58 -- scripts/common.sh@352 -- # local d=2 00:08:25.498 11:35:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.498 11:35:58 -- scripts/common.sh@354 -- # echo 2 00:08:25.498 11:35:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:25.498 11:35:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:25.498 11:35:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:25.498 11:35:58 -- scripts/common.sh@367 -- # return 0 00:08:25.498 11:35:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.498 11:35:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.498 --rc genhtml_branch_coverage=1 00:08:25.498 --rc genhtml_function_coverage=1 00:08:25.498 --rc genhtml_legend=1 00:08:25.498 --rc geninfo_all_blocks=1 00:08:25.498 --rc geninfo_unexecuted_blocks=1 00:08:25.498 00:08:25.498 ' 00:08:25.498 11:35:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.498 --rc genhtml_branch_coverage=1 00:08:25.498 --rc genhtml_function_coverage=1 00:08:25.498 --rc genhtml_legend=1 00:08:25.498 --rc geninfo_all_blocks=1 00:08:25.498 --rc geninfo_unexecuted_blocks=1 00:08:25.498 00:08:25.498 ' 00:08:25.498 11:35:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.498 --rc genhtml_branch_coverage=1 00:08:25.498 --rc genhtml_function_coverage=1 00:08:25.498 --rc genhtml_legend=1 00:08:25.498 --rc geninfo_all_blocks=1 00:08:25.498 --rc geninfo_unexecuted_blocks=1 00:08:25.498 00:08:25.498 ' 00:08:25.498 11:35:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.498 --rc genhtml_branch_coverage=1 00:08:25.498 --rc genhtml_function_coverage=1 00:08:25.498 --rc genhtml_legend=1 00:08:25.498 --rc geninfo_all_blocks=1 00:08:25.498 --rc geninfo_unexecuted_blocks=1 00:08:25.498 00:08:25.498 ' 00:08:25.498 11:35:58 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:25.498 11:35:58 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:25.498 11:35:58 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:25.498 11:35:58 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:25.498 11:35:58 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:25.498 11:35:58 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:25.498 11:35:58 -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:25.498 11:35:58 -- setup/common.sh@18 -- # local node= 00:08:25.498 11:35:58 -- setup/common.sh@19 -- # local var val 00:08:25.498 11:35:58 -- setup/common.sh@20 -- # local mem_f mem 00:08:25.498 11:35:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:25.498 11:35:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:25.498 11:35:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:25.498 11:35:58 -- setup/common.sh@28 -- # mapfile -t mem 00:08:25.498 11:35:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:25.498 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.498 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 5864696 kB' 'MemAvailable: 7376032 kB' 'Buffers: 2684 kB' 'Cached: 1722056 kB' 'SwapCached: 0 kB' 'Active: 496892 kB' 'Inactive: 1345044 kB' 'Active(anon): 127704 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 50900 kB' 'Shmem: 10508 kB' 'KReclaimable: 68104 kB' 'Slab: 165960 kB' 'SReclaimable: 68104 kB' 'SUnreclaim: 97856 kB' 'KernelStack: 6512 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 322020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- 
setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.499 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.499 11:35:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.760 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.760 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- 
# read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # continue 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # IFS=': ' 00:08:25.761 11:35:58 -- setup/common.sh@31 -- # read -r var val _ 00:08:25.761 11:35:58 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:25.761 11:35:58 -- setup/common.sh@33 -- # echo 2048 00:08:25.761 11:35:58 -- setup/common.sh@33 -- # return 0 00:08:25.761 11:35:58 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:25.761 11:35:58 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:25.761 11:35:58 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:25.761 11:35:58 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:25.761 11:35:58 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:25.761 11:35:58 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:25.761 11:35:58 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:25.761 11:35:58 -- setup/hugepages.sh@207 -- # get_nodes 00:08:25.761 11:35:58 -- setup/hugepages.sh@27 -- # local node 00:08:25.761 11:35:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:25.761 11:35:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:25.761 11:35:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:25.761 11:35:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:25.761 11:35:58 -- setup/hugepages.sh@208 -- # clear_hp 00:08:25.761 11:35:58 -- setup/hugepages.sh@37 -- # local node hp 00:08:25.761 11:35:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:25.761 11:35:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:25.761 11:35:58 -- setup/hugepages.sh@41 -- # echo 0 00:08:25.761 11:35:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:25.761 11:35:58 -- setup/hugepages.sh@41 -- # echo 0 00:08:25.761 11:35:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:25.761 11:35:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:25.761 11:35:58 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:25.761 11:35:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.761 11:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.761 11:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.761 ************************************ 00:08:25.761 START TEST default_setup 00:08:25.761 ************************************ 00:08:25.761 11:35:58 -- common/autotest_common.sh@1114 -- # default_setup 00:08:25.761 11:35:58 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:25.761 11:35:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:25.761 11:35:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:25.761 11:35:58 -- setup/hugepages.sh@51 -- # shift 00:08:25.761 11:35:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:25.761 11:35:58 -- setup/hugepages.sh@52 -- # local node_ids 00:08:25.761 11:35:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:25.761 11:35:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:25.761 11:35:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:25.761 11:35:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:25.761 11:35:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:25.761 11:35:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:25.761 11:35:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:25.761 11:35:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:25.761 11:35:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:25.761 11:35:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:25.761 11:35:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:25.761 11:35:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:25.761 11:35:58 -- setup/hugepages.sh@73 -- # return 0 00:08:25.761 11:35:58 -- setup/hugepages.sh@137 -- # setup output 00:08:25.761 11:35:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:25.761 11:35:58 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:26.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:26.593 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.593 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.593 11:35:59 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:26.593 11:35:59 -- setup/hugepages.sh@89 -- # local node 00:08:26.593 11:35:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:26.593 11:35:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:26.593 11:35:59 -- setup/hugepages.sh@92 -- # local surp 00:08:26.593 11:35:59 -- setup/hugepages.sh@93 -- # local resv 00:08:26.593 11:35:59 -- setup/hugepages.sh@94 -- # local anon 00:08:26.593 11:35:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:26.593 11:35:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:26.593 11:35:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:26.593 11:35:59 -- setup/common.sh@18 -- # local node= 00:08:26.593 11:35:59 -- setup/common.sh@19 -- # local var val 00:08:26.593 11:35:59 -- setup/common.sh@20 -- # local mem_f mem 00:08:26.593 11:35:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:26.593 11:35:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:26.593 11:35:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:26.593 11:35:59 -- setup/common.sh@28 -- # mapfile -t mem 00:08:26.593 11:35:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.593 11:35:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7976300 kB' 'MemAvailable: 9487456 kB' 'Buffers: 2684 kB' 'Cached: 1722044 kB' 'SwapCached: 0 kB' 'Active: 498336 kB' 'Inactive: 1345048 kB' 'Active(anon): 129148 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119988 kB' 'Mapped: 51044 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165640 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97908 kB' 'KernelStack: 6464 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read 
-r var val _ 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.593 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.593 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- 
setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.594 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.594 11:35:59 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:26.594 11:35:59 -- setup/common.sh@33 -- # echo 0 00:08:26.594 11:35:59 -- setup/common.sh@33 -- # return 0 00:08:26.594 11:35:59 -- setup/hugepages.sh@97 -- # anon=0 00:08:26.594 11:35:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:26.594 11:35:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:26.594 11:35:59 -- setup/common.sh@18 -- # local node= 00:08:26.594 11:35:59 -- setup/common.sh@19 -- # local var val 00:08:26.594 11:35:59 -- setup/common.sh@20 -- # local mem_f mem 00:08:26.594 11:35:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:26.594 11:35:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:26.594 11:35:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:26.594 11:35:59 -- setup/common.sh@28 -- # mapfile -t mem 00:08:26.594 11:35:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:26.594 11:35:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7976300 kB' 'MemAvailable: 9487456 kB' 'Buffers: 2684 kB' 'Cached: 1722044 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 1345048 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119992 kB' 'Mapped: 50916 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165636 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97904 kB' 'KernelStack: 6480 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 
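(The xtrace entries around this point are setup/common.sh's get_meminfo helper walking /proc/meminfo field by field, hitting "continue" for every key that is not the one requested and echoing the value once it matches. The following is only a minimal sketch of that scan pattern, written from what the trace shows; the function name and node handling here are assumptions, not the exact SPDK helper.)

#!/usr/bin/env bash
# Sketch (assumption) of the meminfo scan pattern visible in the trace:
# read each "Key: value unit" line with IFS=': ', skip non-matching keys,
# print the value of the requested field.
get_meminfo_sketch() {
    local get=$1 node=${2:-}           # e.g. "HugePages_Surp", optional NUMA node
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the repeated "continue" lines above
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

get_meminfo_sketch Hugepagesize     # would print 2048 on the VM in this log
get_meminfo_sketch HugePages_Total  # would print 1024 after default_setup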
00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- 
setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.595 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.595 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 
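(Further down, verify_nr_hugepages collects anon=0, surp=0 and resv=0 from these scans, echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and then compares the configured count against what the kernel reports. The lines below are a rough, hedged sketch of that accounting check, reusing the hypothetical get_meminfo_sketch helper from the previous example; it is not the literal setup/hugepages.sh code.)

# Rough sketch (assumption) of the hugepage accounting done by verify_nr_hugepages:
# the expected page count must match the kernel's total once surplus and
# reserved pages are taken into account.
nr_hugepages=1024
anon=$(get_meminfo_sketch AnonHugePages)     # 0 in this log
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
total=$(get_meminfo_sketch HugePages_Total)  # 1024

(( total == nr_hugepages + surp + resv )) || echo "hugepage count mismatch" >&2
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"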
00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.596 11:35:59 -- setup/common.sh@33 -- # echo 0 00:08:26.596 11:35:59 -- setup/common.sh@33 -- # return 0 00:08:26.596 11:35:59 -- setup/hugepages.sh@99 -- # surp=0 00:08:26.596 11:35:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:26.596 11:35:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:26.596 11:35:59 -- setup/common.sh@18 -- # local node= 00:08:26.596 11:35:59 -- setup/common.sh@19 -- # local var val 00:08:26.596 11:35:59 -- setup/common.sh@20 -- # local mem_f mem 00:08:26.596 11:35:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:26.596 11:35:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:26.596 11:35:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:26.596 11:35:59 -- setup/common.sh@28 -- # mapfile -t mem 00:08:26.596 11:35:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:26.596 
11:35:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7976300 kB' 'MemAvailable: 9487456 kB' 'Buffers: 2684 kB' 'Cached: 1722044 kB' 'SwapCached: 0 kB' 'Active: 497872 kB' 'Inactive: 1345048 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119792 kB' 'Mapped: 50916 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165640 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97908 kB' 'KernelStack: 6464 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.596 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.596 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 
11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.859 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.859 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 
11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:26.860 11:35:59 -- setup/common.sh@33 -- # echo 0 00:08:26.860 11:35:59 -- setup/common.sh@33 -- # return 0 00:08:26.860 11:35:59 -- setup/hugepages.sh@100 -- # resv=0 00:08:26.860 11:35:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:26.860 nr_hugepages=1024 00:08:26.860 resv_hugepages=0 00:08:26.860 11:35:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:26.860 surplus_hugepages=0 00:08:26.860 11:35:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:26.860 anon_hugepages=0 00:08:26.860 11:35:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:26.860 11:35:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:26.860 11:35:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:26.860 11:35:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:26.860 11:35:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:26.860 11:35:59 -- setup/common.sh@18 -- # local node= 00:08:26.860 11:35:59 -- setup/common.sh@19 -- # local var val 00:08:26.860 11:35:59 -- setup/common.sh@20 -- # local mem_f mem 00:08:26.860 11:35:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:26.860 11:35:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:26.860 11:35:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:26.860 11:35:59 -- setup/common.sh@28 -- # mapfile -t mem 00:08:26.860 11:35:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7976300 kB' 'MemAvailable: 9487456 kB' 'Buffers: 2684 kB' 'Cached: 1722044 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 1345048 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 50916 kB' 
'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165640 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97908 kB' 'KernelStack: 6448 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 
-- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.860 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.860 11:35:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- 
setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- 
setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:26.861 11:35:59 -- setup/common.sh@33 -- # echo 1024 00:08:26.861 11:35:59 -- setup/common.sh@33 -- # return 0 00:08:26.861 11:35:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:26.861 11:35:59 -- setup/hugepages.sh@112 -- # get_nodes 00:08:26.861 11:35:59 -- setup/hugepages.sh@27 -- # local node 00:08:26.861 11:35:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:26.861 11:35:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:26.861 11:35:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:26.861 11:35:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:26.861 11:35:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:26.861 11:35:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:26.861 11:35:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:26.861 11:35:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:26.861 11:35:59 -- setup/common.sh@18 -- # local node=0 00:08:26.861 11:35:59 -- setup/common.sh@19 -- # local var val 00:08:26.861 11:35:59 -- setup/common.sh@20 -- # local mem_f mem 00:08:26.861 11:35:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:26.861 11:35:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:26.861 11:35:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:26.861 11:35:59 -- setup/common.sh@28 -- # mapfile -t mem 00:08:26.861 11:35:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:26.861 11:35:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7976300 kB' 'MemUsed: 4262820 kB' 'SwapCached: 0 kB' 'Active: 497864 kB' 'Inactive: 1345048 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1724728 kB' 'Mapped: 50916 kB' 'AnonPages: 119784 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67732 kB' 'Slab: 165632 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 
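[editor's note] The scan above walks every field of the meminfo file with IFS=': ' and read -r until it hits the requested key (here HugePages_Total, then the per-node HugePages_Surp from /sys/devices/system/node/node0/meminfo). A minimal, self-contained sketch of that technique follows; the helper name and defaults are illustrative, not SPDK's own setup/common.sh:

  #!/usr/bin/env bash
  # Return the value of one meminfo field, optionally from a node-local view,
  # by splitting each "Key:   value kB" line on ': ' as the trace above does.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo
      # Prefer the per-node file when a node id is given and it exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && file=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
          # Per-node files prefix every line with "Node <id> "; re-split past it.
          if [[ $var == Node ]]; then
              IFS=': ' read -r var val _ <<<"$_"
          fi
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1
  }

  get_meminfo_sketch HugePages_Total      # e.g. 1024 on the VM in this log
  get_meminfo_sketch HugePages_Surp 0     # surplus pages on NUMA node 0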
11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.861 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.861 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # continue 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # IFS=': ' 00:08:26.862 11:35:59 -- setup/common.sh@31 -- # read -r var val _ 00:08:26.862 11:35:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:26.862 11:35:59 -- setup/common.sh@33 -- # echo 0 00:08:26.862 11:35:59 -- setup/common.sh@33 -- # return 0 00:08:26.862 11:35:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:26.862 11:35:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:26.862 11:35:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:26.862 11:35:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:26.862 11:35:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:26.862 node0=1024 expecting 1024 00:08:26.862 11:35:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:26.862 00:08:26.862 real 0m1.140s 00:08:26.862 user 0m0.487s 00:08:26.862 sys 0m0.613s 00:08:26.862 11:35:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.862 11:35:59 -- common/autotest_common.sh@10 -- # set +x 00:08:26.862 ************************************ 00:08:26.862 END TEST default_setup 00:08:26.862 ************************************ 00:08:26.862 11:35:59 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:08:26.862 11:35:59 
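[editor's note] The "node0=1024 expecting 1024" line above is the final bookkeeping step of default_setup: the pool is considered consistent when HugePages_Total equals the requested pages plus the surplus and reserved counts gathered earlier in the verification (both 0 on this run), and when each node holds its expected share. A hedged sketch of that check, with illustrative variable names rather than the script's own:

  nr_hugepages=1024                   # pages default_setup asked for
  total=1024 surp=0 resv=0            # values read from /proc/meminfo above
  declare -A node_pages=([0]=1024)    # per-node counts from node*/meminfo

  (( total == nr_hugepages + surp + resv )) || echo "global hugepage count off"
  for node in "${!node_pages[@]}"; do
      expected=$(( nr_hugepages / ${#node_pages[@]} ))
      echo "node${node}=${node_pages[$node]} expecting ${expected}"
  done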
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.862 11:35:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.862 11:35:59 -- common/autotest_common.sh@10 -- # set +x 00:08:26.862 ************************************ 00:08:26.862 START TEST per_node_1G_alloc 00:08:26.862 ************************************ 00:08:26.862 11:35:59 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:08:26.862 11:35:59 -- setup/hugepages.sh@143 -- # local IFS=, 00:08:26.862 11:35:59 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:08:26.862 11:35:59 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:26.862 11:35:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:26.862 11:35:59 -- setup/hugepages.sh@51 -- # shift 00:08:26.862 11:35:59 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:26.862 11:35:59 -- setup/hugepages.sh@52 -- # local node_ids 00:08:26.862 11:35:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:26.862 11:35:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:26.863 11:35:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:26.863 11:35:59 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:26.863 11:35:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:26.863 11:35:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:26.863 11:35:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:26.863 11:35:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:26.863 11:35:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:26.863 11:35:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:26.863 11:35:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:26.863 11:35:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:08:26.863 11:35:59 -- setup/hugepages.sh@73 -- # return 0 00:08:26.863 11:35:59 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:08:26.863 11:35:59 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:08:26.863 11:35:59 -- setup/hugepages.sh@146 -- # setup output 00:08:26.863 11:35:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:26.863 11:35:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:27.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:27.437 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:27.437 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:27.437 11:36:00 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:08:27.437 11:36:00 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:08:27.437 11:36:00 -- setup/hugepages.sh@89 -- # local node 00:08:27.437 11:36:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:27.437 11:36:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:27.437 11:36:00 -- setup/hugepages.sh@92 -- # local surp 00:08:27.437 11:36:00 -- setup/hugepages.sh@93 -- # local resv 00:08:27.437 11:36:00 -- setup/hugepages.sh@94 -- # local anon 00:08:27.438 11:36:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:27.438 11:36:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:27.438 11:36:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:27.438 11:36:00 -- setup/common.sh@18 -- # local node= 00:08:27.438 11:36:00 -- setup/common.sh@19 -- # local var val 00:08:27.438 11:36:00 -- setup/common.sh@20 -- # local mem_f mem 00:08:27.438 11:36:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:27.438 11:36:00 -- 
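[editor's note] The get_test_nr_hugepages 1048576 0 call above reduces to simple arithmetic: a 1 GiB request expressed in kB, divided by the 2048 kB default hugepage size reported in the meminfo dumps, gives the 512 pages that are then pinned to node 0 via NRHUGE/HUGENODE before setup.sh runs. A small sketch of that computation (illustrative, not the function itself):

  size_kb=1048576                 # requested allocation, in kB
  default_hugepage_kb=2048        # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512
  echo "NRHUGE=$nr_hugepages HUGENODE=0"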
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:27.438 11:36:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:27.438 11:36:00 -- setup/common.sh@28 -- # mapfile -t mem 00:08:27.438 11:36:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9025980 kB' 'MemAvailable: 10537148 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498768 kB' 'Inactive: 1345060 kB' 'Active(anon): 129580 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120524 kB' 'Mapped: 51092 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165668 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97936 kB' 'KernelStack: 6504 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 323836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 
-- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 
11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.438 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.438 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:27.439 11:36:00 -- setup/common.sh@33 -- # echo 0 00:08:27.439 11:36:00 -- setup/common.sh@33 -- # return 0 00:08:27.439 11:36:00 -- setup/hugepages.sh@97 -- # anon=0 00:08:27.439 11:36:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:27.439 11:36:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:27.439 11:36:00 -- setup/common.sh@18 -- # local node= 00:08:27.439 11:36:00 -- setup/common.sh@19 -- # local var val 00:08:27.439 11:36:00 -- setup/common.sh@20 -- # local mem_f mem 00:08:27.439 11:36:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:27.439 11:36:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:27.439 11:36:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:27.439 11:36:00 -- setup/common.sh@28 -- # mapfile -t mem 00:08:27.439 11:36:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9025980 kB' 'MemAvailable: 10537148 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498168 kB' 'Inactive: 1345060 kB' 
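[editor's note] The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above is the usual way to ask whether transparent hugepages are enabled: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled, and the AnonHugePages field is only worth counting when that mode is not [never]. A hedged, standalone sketch of the same check and read:

  thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  anon=0
  if [[ $thp_state != *"[never]"* ]]; then
      # Only count THP-backed anonymous memory when THP can actually be used.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages: ${anon} kB"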
'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 51092 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165676 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97944 kB' 'KernelStack: 6456 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 324044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # 
continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.439 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.439 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.440 11:36:00 -- setup/common.sh@33 -- # echo 0 00:08:27.440 11:36:00 -- setup/common.sh@33 -- # return 0 00:08:27.440 11:36:00 -- setup/hugepages.sh@99 -- # surp=0 00:08:27.440 11:36:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:27.440 11:36:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:27.440 11:36:00 -- setup/common.sh@18 -- # local node= 00:08:27.440 11:36:00 -- setup/common.sh@19 -- # local var val 00:08:27.440 11:36:00 -- setup/common.sh@20 -- # local mem_f mem 00:08:27.440 11:36:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:27.440 11:36:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:27.440 11:36:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:27.440 11:36:00 -- setup/common.sh@28 -- # mapfile -t mem 00:08:27.440 11:36:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9025980 kB' 'MemAvailable: 10537148 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 497960 kB' 'Inactive: 1345060 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119700 kB' 'Mapped: 51040 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165676 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97944 kB' 'KernelStack: 6492 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 324044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
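[editor's note] At this point the trace has collected anon=0 and surp=0 and is scanning for HugePages_Rsvd; a plausible wrap-up of this verification, sketched with illustrative names rather than the script's exact code, repeats the same consistency check seen in the default_setup pass:

  nr_hugepages=512
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent: $total pages"
  else
      echo "unexpected hugepage count: total=$total surp=$surp resv=$resv"
  fi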
1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.440 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.440 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.441 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.441 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:27.442 11:36:00 -- setup/common.sh@33 -- # echo 0 00:08:27.442 11:36:00 -- setup/common.sh@33 -- # return 0 00:08:27.442 11:36:00 -- setup/hugepages.sh@100 -- # resv=0 00:08:27.442 11:36:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:27.442 nr_hugepages=512 00:08:27.442 resv_hugepages=0 00:08:27.442 11:36:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:27.442 surplus_hugepages=0 00:08:27.442 11:36:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:27.442 anon_hugepages=0 00:08:27.442 11:36:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:27.442 11:36:00 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:27.442 11:36:00 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:27.442 11:36:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:27.442 11:36:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:27.442 11:36:00 -- setup/common.sh@18 -- # local node= 00:08:27.442 11:36:00 -- setup/common.sh@19 -- # local var val 00:08:27.442 11:36:00 -- setup/common.sh@20 -- # local mem_f mem 00:08:27.442 11:36:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:27.442 11:36:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:27.442 11:36:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:27.442 11:36:00 -- setup/common.sh@28 -- # mapfile -t mem 00:08:27.442 11:36:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9025980 kB' 'MemAvailable: 10537148 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498120 kB' 'Inactive: 1345060 kB' 'Active(anon): 128932 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 51040 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165672 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97940 kB' 'KernelStack: 6460 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 324044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 
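
The long runs of "[[ key == pattern ]] / continue" above are get_meminfo walking every /proc/meminfo line until the requested counter (here HugePages_Rsvd, then HugePages_Total) is found. A condensed sketch of that lookup, using the same IFS=': ' field splitting the trace shows; the function name and argument handling are illustrative, not a verbatim copy of the repository helper:

    get_meminfo_sketch() {
        local get=$1 node=${2:-}        # counter name, optional NUMA node
        local mem_f=/proc/meminfo mem var val _

        # With a node argument the same parser reads the per-node counters.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node <n>" prefix, if present

        while IFS=': ' read -r var val _; do
            # Skip every line whose key does not match, exactly as the trace does.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On the VM traced here, get_meminfo_sketch HugePages_Total would print 512 and get_meminfo_sketch HugePages_Surp would print 0, matching the values echoed in the surrounding output.
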
11:36:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 
11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.442 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.442 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:27.443 11:36:00 -- setup/common.sh@33 -- # echo 512 00:08:27.443 11:36:00 -- setup/common.sh@33 -- # return 0 00:08:27.443 11:36:00 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:27.443 11:36:00 -- setup/hugepages.sh@112 -- # get_nodes 00:08:27.443 11:36:00 -- setup/hugepages.sh@27 -- # local node 00:08:27.443 11:36:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:27.443 11:36:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:27.443 11:36:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:27.443 11:36:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:27.443 11:36:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:27.443 11:36:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:27.443 11:36:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:27.443 11:36:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:27.443 11:36:00 -- setup/common.sh@18 -- # local node=0 00:08:27.443 11:36:00 -- setup/common.sh@19 -- # local 
var val 00:08:27.443 11:36:00 -- setup/common.sh@20 -- # local mem_f mem 00:08:27.443 11:36:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:27.443 11:36:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:27.443 11:36:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:27.443 11:36:00 -- setup/common.sh@28 -- # mapfile -t mem 00:08:27.443 11:36:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9025728 kB' 'MemUsed: 3213392 kB' 'SwapCached: 0 kB' 'Active: 497760 kB' 'Inactive: 1345060 kB' 'Active(anon): 128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1724732 kB' 'Mapped: 50912 kB' 'AnonPages: 119712 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67732 kB' 'Slab: 165744 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 98012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.443 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.443 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- 
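
Just above, the same lookup is repeated against /sys/devices/system/node/node0/meminfo to check how the 512 pages landed on node 0. A rough, self-contained condensation of that per-node check, reading the "Node <n> Key: value" format those files carry; the function and variable names are illustrative:

    verify_node_hugepages_sketch() {
        local expected=$1 node_dir node total surp
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            node=${node_dir##*node}
            # Per-node meminfo lines look like "Node 0 HugePages_Total:    512".
            total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
            surp=$(awk '$3 == "HugePages_Surp:"  {print $4}' "$node_dir/meminfo")
            echo "node${node}=${total} expecting $((expected + surp))"
            (( total == expected + surp )) || return 1
        done
    }

On this single-node VM it would print the same "node0=512 expecting 512" line that appears a little further down before the test is marked passed.
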
setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # continue 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:27.444 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:27.444 11:36:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:27.444 11:36:00 -- setup/common.sh@33 -- # echo 0 00:08:27.444 11:36:00 -- setup/common.sh@33 -- # return 0 00:08:27.444 11:36:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:27.444 11:36:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:27.444 11:36:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:27.444 11:36:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:27.444 11:36:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:27.444 node0=512 expecting 512 00:08:27.444 11:36:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:27.444 00:08:27.444 real 0m0.620s 00:08:27.444 user 0m0.278s 00:08:27.444 sys 0m0.382s 00:08:27.444 11:36:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.444 11:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.444 ************************************ 00:08:27.444 END TEST per_node_1G_alloc 00:08:27.444 ************************************ 00:08:27.444 11:36:00 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:27.444 11:36:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:27.444 11:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.444 11:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.444 ************************************ 00:08:27.444 START TEST even_2G_alloc 00:08:27.444 ************************************ 00:08:27.444 11:36:00 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:08:27.444 11:36:00 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:27.444 11:36:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:27.444 11:36:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:27.444 11:36:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:27.444 11:36:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:27.444 11:36:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:27.444 11:36:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:27.444 11:36:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:27.444 11:36:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:27.444 11:36:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:27.445 11:36:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:27.445 11:36:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:27.445 11:36:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:27.445 11:36:00 -- 
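
The even_2G_alloc test that starts above repeats the same accounting for an evenly spread pool: get_test_nr_hugepages 2097152 converts the requested 2 GiB (expressed in kB) into a page count using the 2048 kB default hugepage size, and HUGE_EVEN_ALLOC=yes asks setup.sh to spread those pages across the online nodes. A minimal sketch of that conversion step, assuming the kB unit the surrounding counters use; the function name is illustrative:

    get_test_nr_hugepages_sketch() {
        local size_kb=$1 default_kb
        # Default hugepage size as reported by /proc/meminfo ("Hugepagesize: 2048 kB").
        default_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
        (( size_kb >= default_kb )) || return 1
        echo $(( size_kb / default_kb ))
    }

    nr_hugepages=$(get_test_nr_hugepages_sketch 2097152)   # 2097152 / 2048 -> 1024

This matches the nr_hugepages=1024 and "Hugetlb: 2097152 kB" values visible in the trace below.
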
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:27.445 11:36:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:27.445 11:36:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:08:27.445 11:36:00 -- setup/hugepages.sh@83 -- # : 0 00:08:27.445 11:36:00 -- setup/hugepages.sh@84 -- # : 0 00:08:27.445 11:36:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:27.445 11:36:00 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:27.445 11:36:00 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:27.445 11:36:00 -- setup/hugepages.sh@153 -- # setup output 00:08:27.445 11:36:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:27.445 11:36:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:28.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:28.051 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:28.051 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:28.051 11:36:00 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:28.051 11:36:00 -- setup/hugepages.sh@89 -- # local node 00:08:28.051 11:36:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:28.051 11:36:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:28.051 11:36:00 -- setup/hugepages.sh@92 -- # local surp 00:08:28.051 11:36:00 -- setup/hugepages.sh@93 -- # local resv 00:08:28.051 11:36:00 -- setup/hugepages.sh@94 -- # local anon 00:08:28.051 11:36:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:28.051 11:36:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:28.051 11:36:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:28.051 11:36:00 -- setup/common.sh@18 -- # local node= 00:08:28.051 11:36:00 -- setup/common.sh@19 -- # local var val 00:08:28.051 11:36:00 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.051 11:36:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.051 11:36:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.051 11:36:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.051 11:36:00 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.051 11:36:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7979564 kB' 'MemAvailable: 9490732 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498540 kB' 'Inactive: 1345060 kB' 'Active(anon): 129352 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120512 kB' 'Mapped: 51084 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165772 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 98040 kB' 'KernelStack: 6440 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:00 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:00 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 
11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.051 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.051 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # 
continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.052 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.052 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.052 11:36:01 -- setup/hugepages.sh@97 -- # anon=0 00:08:28.052 11:36:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:28.052 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:28.052 11:36:01 -- setup/common.sh@18 -- # local node= 00:08:28.052 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.052 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.052 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.052 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.052 11:36:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.052 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.052 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.052 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7979564 kB' 'MemAvailable: 9490732 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498008 kB' 'Inactive: 1345060 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120000 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165784 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 98052 kB' 'KernelStack: 6464 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 
00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.052 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.052 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # 
continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.053 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.053 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.054 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.054 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.054 11:36:01 -- setup/hugepages.sh@99 -- # surp=0 00:08:28.054 11:36:01 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:28.054 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:28.054 11:36:01 -- setup/common.sh@18 -- # local node= 00:08:28.054 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.054 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.054 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.054 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.054 11:36:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.054 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.054 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7979564 kB' 'MemAvailable: 9490732 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 497884 kB' 'Inactive: 1345060 kB' 'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119804 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165764 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 98032 kB' 'KernelStack: 6448 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 
00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.054 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.054 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- 
setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 
00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.055 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.055 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.055 11:36:01 -- setup/hugepages.sh@100 -- # resv=0 00:08:28.055 11:36:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:28.055 nr_hugepages=1024 00:08:28.055 resv_hugepages=0 00:08:28.055 11:36:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:28.055 surplus_hugepages=0 00:08:28.055 11:36:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:28.055 anon_hugepages=0 00:08:28.055 11:36:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:28.055 11:36:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:28.055 11:36:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:28.055 11:36:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:28.055 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:28.055 11:36:01 -- setup/common.sh@18 -- # local node= 00:08:28.055 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.055 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.055 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.055 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.055 11:36:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.055 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.055 11:36:01 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:08:28.055 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7979564 kB' 'MemAvailable: 9490732 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 497884 kB' 'Inactive: 1345060 kB' 'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120064 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165764 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 98032 kB' 'KernelStack: 6448 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.055 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.055 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.055 
11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.056 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.056 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.316 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.316 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 
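Between the meminfo dumps above, hugepages.sh (@97 through @110 in the trace) has gathered the four counters this check cares about: anon=0 (AnonHugePages), surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd) and a HugePages_Total of 1024, and it asserts that the pool the kernel reports adds up to the 1024 pages the even_2G_alloc case requested, with nothing surplus or reserved. An illustrative stand-alone version of that consistency check (not the SPDK script itself; the variable wiring is assumed):

# Values as reported in this run are shown in the comments.
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)   # 1024
surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)     # 0
resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)     # 0
(( 1024 == total + surp + resv )) || echo 'unexpected hugepage accounting' >&2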
00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 
00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.317 11:36:01 -- setup/common.sh@33 -- # echo 1024 00:08:28.317 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.317 11:36:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:28.317 11:36:01 -- setup/hugepages.sh@112 -- # get_nodes 00:08:28.317 11:36:01 -- setup/hugepages.sh@27 -- # local node 00:08:28.317 11:36:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:28.317 11:36:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:28.317 11:36:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:28.317 11:36:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:28.317 11:36:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:28.317 11:36:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:28.317 11:36:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:28.317 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:28.317 11:36:01 -- setup/common.sh@18 -- # local node=0 00:08:28.317 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.317 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.317 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.317 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:28.317 11:36:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:28.317 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.317 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7979564 kB' 'MemUsed: 4259556 kB' 'SwapCached: 0 kB' 'Active: 498036 kB' 'Inactive: 1345060 kB' 'Active(anon): 128848 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724732 kB' 'Mapped: 50912 kB' 'AnonPages: 119952 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67732 kB' 'Slab: 165764 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 98032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:28.317 11:36:01 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 
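From hugepages.sh@112 (get_nodes) onward, the same bookkeeping runs once per NUMA node: the loop over /sys/devices/system/node/node+([0-9]) records an expected 1024 pages for the single node of this VM, and get_meminfo is pointed at /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, which is why mem_f switches in the trace just above; this is what produces the "node0=1024 expecting 1024" summary further down. A rough sketch of that per-node pass, under the same assumptions as the get_meminfo sketch earlier:

# Assumed wiring: every node is expected to hold the full 1024-page pool.
declare -A nodes_sys
for node_dir in /sys/devices/system/node/node[0-9]*; do
  nodes_sys[${node_dir##*node}]=1024
done
for node in "${!nodes_sys[@]}"; do
  got=$(get_meminfo HugePages_Total "$node")       # parses nodeN/meminfo via the sketch above
  echo "node${node}=${got} expecting ${nodes_sys[$node]}"
done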
00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- 
setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.317 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.317 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.318 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.318 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.318 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.318 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.318 11:36:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:28.318 11:36:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:28.318 11:36:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:28.318 11:36:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:28.318 
11:36:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:28.318 node0=1024 expecting 1024 00:08:28.318 11:36:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:28.318 00:08:28.318 real 0m0.683s 00:08:28.318 user 0m0.320s 00:08:28.318 sys 0m0.409s 00:08:28.318 11:36:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:28.318 11:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.318 ************************************ 00:08:28.318 END TEST even_2G_alloc 00:08:28.318 ************************************ 00:08:28.318 11:36:01 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:28.318 11:36:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:28.318 11:36:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.318 11:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.318 ************************************ 00:08:28.318 START TEST odd_alloc 00:08:28.318 ************************************ 00:08:28.318 11:36:01 -- common/autotest_common.sh@1114 -- # odd_alloc 00:08:28.318 11:36:01 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:28.318 11:36:01 -- setup/hugepages.sh@49 -- # local size=2098176 00:08:28.318 11:36:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:28.318 11:36:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:28.318 11:36:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:28.318 11:36:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:28.318 11:36:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:28.318 11:36:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:28.318 11:36:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:28.318 11:36:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:28.318 11:36:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:28.318 11:36:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:28.318 11:36:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:28.318 11:36:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:28.318 11:36:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:28.318 11:36:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:08:28.318 11:36:01 -- setup/hugepages.sh@83 -- # : 0 00:08:28.318 11:36:01 -- setup/hugepages.sh@84 -- # : 0 00:08:28.318 11:36:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:28.318 11:36:01 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:28.318 11:36:01 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:08:28.318 11:36:01 -- setup/hugepages.sh@160 -- # setup output 00:08:28.318 11:36:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:28.318 11:36:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:28.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:28.840 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:28.840 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:28.840 11:36:01 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:28.840 11:36:01 -- setup/hugepages.sh@89 -- # local node 00:08:28.840 11:36:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:28.840 11:36:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:28.840 11:36:01 -- setup/hugepages.sh@92 -- # local surp 00:08:28.840 11:36:01 -- setup/hugepages.sh@93 -- # local resv 00:08:28.840 11:36:01 -- setup/hugepages.sh@94 -- # local anon 00:08:28.840 11:36:01 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:28.840 11:36:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:28.840 11:36:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:28.840 11:36:01 -- setup/common.sh@18 -- # local node= 00:08:28.840 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.840 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.840 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.840 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.840 11:36:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.840 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.840 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.840 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7980816 kB' 'MemAvailable: 9491992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498184 kB' 'Inactive: 1345060 kB' 'Active(anon): 128996 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120084 kB' 'Mapped: 51040 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165672 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97924 kB' 'KernelStack: 6464 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 
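The trace above is setup/common.sh's get_meminfo walking a /proc/meminfo snapshot one field at a time (the IFS=': ' / read -r var val _ idiom) until it reaches the requested key, AnonHugePages, and echoing its value. A minimal sketch of that lookup pattern, assuming plain bash and nothing SPDK-specific (the helper name is illustrative, not the real setup/common.sh function):
  get_meminfo_sketch() {                     # illustrative stand-in, not setup/common.sh itself
      local want=$1 var val _
      while IFS=': ' read -r var val _; do   # same split the trace shows; val is the number, "_" eats "kB"
          if [[ $var == "$want" ]]; then     # every non-matching field just hits "continue" in the trace
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      echo 0                                 # key not present
  }
  get_meminfo_sketch AnonHugePages           # prints 0 on this builder, matching the "echo 0" in the trace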
00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # 
continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.841 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.841 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:28.842 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.842 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.842 11:36:01 -- setup/hugepages.sh@97 -- # anon=0 00:08:28.842 11:36:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:28.842 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:28.842 11:36:01 -- setup/common.sh@18 -- # local node= 00:08:28.842 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.842 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.842 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.842 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.842 11:36:01 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.842 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.842 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7980816 kB' 'MemAvailable: 9491992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498000 kB' 'Inactive: 1345060 kB' 'Active(anon): 128812 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119904 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165672 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97924 kB' 'KernelStack: 6448 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- 
# read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 
11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.842 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.842 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 
00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.843 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.843 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.843 11:36:01 -- setup/hugepages.sh@99 -- # surp=0 00:08:28.843 11:36:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:28.843 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:28.843 11:36:01 -- setup/common.sh@18 -- # local node= 00:08:28.843 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.843 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.843 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.843 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.843 11:36:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.843 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.843 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7980816 kB' 'MemAvailable: 9491992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 497968 kB' 'Inactive: 1345060 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119864 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165672 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97924 kB' 'KernelStack: 6448 kB' 
'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.843 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.843 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.844 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.844 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:28.845 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.845 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.845 11:36:01 -- setup/hugepages.sh@100 -- # resv=0 00:08:28.845 11:36:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:28.845 nr_hugepages=1025 00:08:28.845 resv_hugepages=0 00:08:28.845 11:36:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:28.845 surplus_hugepages=0 00:08:28.845 11:36:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:28.845 anon_hugepages=0 00:08:28.845 11:36:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:28.845 11:36:01 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:28.845 11:36:01 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:28.845 11:36:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:28.845 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:28.845 11:36:01 -- setup/common.sh@18 -- # local node= 00:08:28.845 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.845 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.845 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.845 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.845 11:36:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.845 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.845 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7980816 kB' 'MemAvailable: 9491992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498048 kB' 'Inactive: 1345060 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119980 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165668 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97920 kB' 'KernelStack: 6464 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 
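At hugepages.sh@107-110 just above, the test checks that the kernel's HugePages_Total equals the requested page count plus the surplus and reserved pages it just read back (all 0 here). A rough equivalent of that bookkeeping check, with illustrative variable names rather than the script's own:
  nr_hugepages=1025 surp=0 resv=0            # the values echoed by the trace above
  hp_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  if (( hp_total == nr_hugepages + surp + resv )); then
      echo "HugePages_Total matches the requested 1025 pages"
  else
      echo "unexpected surplus/reserved hugepages" >&2
  fi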
00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 
00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.845 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.845 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:28.846 11:36:01 -- setup/common.sh@33 -- # echo 1025 00:08:28.846 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.846 11:36:01 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:28.846 11:36:01 -- setup/hugepages.sh@112 -- # get_nodes 00:08:28.846 11:36:01 -- setup/hugepages.sh@27 -- # local node 00:08:28.846 11:36:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:28.846 11:36:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
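The scan traced above is setup/common.sh's get_meminfo walking every "Key: value" line of the meminfo source until it reaches the requested field; here HugePages_Total matched and the helper echoed 1025. A minimal illustrative sketch of that lookup, written from this trace and only an assumption about the helper's shape, not the script's actual code:

get_meminfo_sketch() {
    # $1 = field to print (e.g. HugePages_Total), $2 = optional NUMA node
    local get=$1 node=$2 var val _ mem_f=/proc/meminfo mem
    # prefer the per-node file when a node is given and the file exists
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    # per-node files prefix each line with "Node <n> "; drop that prefix
    mem=("${mem[@]#Node $node }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total   -> 1025 on this runner
#      get_meminfo_sketch HugePages_Surp 0  -> 0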
00:08:28.846 11:36:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:28.846 11:36:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:28.846 11:36:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:28.846 11:36:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:28.846 11:36:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:28.846 11:36:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:28.846 11:36:01 -- setup/common.sh@18 -- # local node=0 00:08:28.846 11:36:01 -- setup/common.sh@19 -- # local var val 00:08:28.846 11:36:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:28.846 11:36:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.846 11:36:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:28.846 11:36:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:28.846 11:36:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.846 11:36:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7980816 kB' 'MemUsed: 4258304 kB' 'SwapCached: 0 kB' 'Active: 497724 kB' 'Inactive: 1345060 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724732 kB' 'Mapped: 50912 kB' 'AnonPages: 119884 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67748 kB' 'Slab: 165660 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.846 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.846 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 
11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 
11:36:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # continue 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:28.847 11:36:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:28.847 11:36:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:28.847 11:36:01 -- setup/common.sh@33 -- # echo 0 00:08:28.847 11:36:01 -- setup/common.sh@33 -- # return 0 00:08:28.847 11:36:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:28.847 11:36:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:28.847 11:36:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:28.847 11:36:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:28.847 node0=1025 expecting 1025 00:08:28.847 11:36:01 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:08:28.847 11:36:01 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:08:28.847 00:08:28.847 real 0m0.665s 00:08:28.847 user 0m0.292s 00:08:28.847 sys 0m0.410s 00:08:28.847 11:36:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:28.847 11:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.847 ************************************ 00:08:28.847 END TEST odd_alloc 00:08:28.847 ************************************ 00:08:29.108 11:36:01 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:29.108 11:36:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.108 11:36:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.108 11:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:29.108 ************************************ 00:08:29.108 START TEST custom_alloc 00:08:29.108 ************************************ 00:08:29.108 11:36:01 -- common/autotest_common.sh@1114 -- # custom_alloc 00:08:29.108 11:36:01 -- setup/hugepages.sh@167 -- # local IFS=, 00:08:29.108 11:36:01 -- setup/hugepages.sh@169 -- # local node 00:08:29.108 11:36:01 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:29.108 11:36:01 -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:29.108 11:36:01 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:29.108 11:36:01 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:08:29.108 11:36:01 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:29.108 11:36:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:29.108 11:36:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:29.108 11:36:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:29.108 11:36:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:29.108 11:36:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:29.108 11:36:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:29.108 11:36:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:29.108 11:36:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:29.108 11:36:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:29.108 11:36:01 -- setup/hugepages.sh@83 -- # : 0 00:08:29.108 11:36:01 -- setup/hugepages.sh@84 -- # : 0 00:08:29.108 11:36:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:29.108 11:36:01 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:29.108 11:36:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:29.108 11:36:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:29.108 11:36:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:29.108 11:36:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:29.108 11:36:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:29.108 11:36:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:29.108 11:36:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:29.108 11:36:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:29.108 11:36:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:29.108 11:36:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:29.108 11:36:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:29.108 11:36:01 -- setup/hugepages.sh@78 -- # return 0 00:08:29.108 11:36:01 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:08:29.108 11:36:01 -- setup/hugepages.sh@187 -- # setup output 00:08:29.108 11:36:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:29.108 11:36:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:29.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:29.630 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:29.630 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:29.630 11:36:02 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:08:29.630 11:36:02 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:08:29.630 11:36:02 -- setup/hugepages.sh@89 -- # local node 00:08:29.630 11:36:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:29.630 11:36:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:29.630 11:36:02 -- setup/hugepages.sh@92 -- # local surp 
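Just above, the custom_alloc prologue converted the requested size into a page count: 1048576 kB at the default 2048 kB hugepage size (the "Hugepagesize: 2048 kB" field in the meminfo dumps) is 512 pages, and with a single NUMA node the whole request becomes HUGENODE='nodes_hp[0]=512'. A hedged sketch of that sizing arithmetic, illustrative only and not the real get_test_nr_hugepages:

get_test_nr_hugepages_sketch() {
    # $1 = requested size in kB; remaining args = target nodes (default: node 0)
    local size=$1; shift
    local hugepagesize_kb=2048              # matches 'Hugepagesize: 2048 kB' above
    local nr_hugepages=$(( size / hugepagesize_kb ))
    local -a user_nodes=("${@:-0}") nodes_hp=()
    local node
    # spread the pages evenly across the requested nodes (one node on this runner)
    for node in "${user_nodes[@]}"; do
        nodes_hp[node]=$(( nr_hugepages / ${#user_nodes[@]} ))
        echo "nodes_hp[$node]=${nodes_hp[node]}"
    done
    echo "nr_hugepages=$nr_hugepages"
}
# get_test_nr_hugepages_sketch 1048576  ->  nodes_hp[0]=512, nr_hugepages=512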
00:08:29.630 11:36:02 -- setup/hugepages.sh@93 -- # local resv 00:08:29.630 11:36:02 -- setup/hugepages.sh@94 -- # local anon 00:08:29.630 11:36:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:29.630 11:36:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:29.630 11:36:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:29.630 11:36:02 -- setup/common.sh@18 -- # local node= 00:08:29.630 11:36:02 -- setup/common.sh@19 -- # local var val 00:08:29.630 11:36:02 -- setup/common.sh@20 -- # local mem_f mem 00:08:29.630 11:36:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:29.630 11:36:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:29.630 11:36:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:29.630 11:36:02 -- setup/common.sh@28 -- # mapfile -t mem 00:08:29.630 11:36:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:29.631 11:36:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9032816 kB' 'MemAvailable: 10543992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498212 kB' 'Inactive: 1345060 kB' 'Active(anon): 129024 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120184 kB' 'Mapped: 51056 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165644 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97896 kB' 'KernelStack: 6512 kB' 'PageTables: 4684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val 
_ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 
00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.631 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.631 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:29.632 11:36:02 -- setup/common.sh@33 -- # echo 0 00:08:29.632 11:36:02 -- setup/common.sh@33 -- # return 0 00:08:29.632 11:36:02 -- setup/hugepages.sh@97 -- # anon=0 00:08:29.632 11:36:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:29.632 11:36:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:29.632 11:36:02 -- setup/common.sh@18 -- # local node= 00:08:29.632 11:36:02 -- setup/common.sh@19 -- # local var val 00:08:29.632 11:36:02 -- setup/common.sh@20 -- # local mem_f mem 00:08:29.632 11:36:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
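From here on the trace is verify_nr_hugepages collecting its inputs for the custom_alloc run: AnonHugePages came back 0 (anon=0 above), and HugePages_Surp and, further below, HugePages_Rsvd are read the same way so the script can confirm that HugePages_Total equals the requested page count plus surplus plus reserved pages, the same accounting the odd_alloc test closed with ("node0=1025 expecting 1025"). A compact sketch of that check, reusing the illustrative get_meminfo_sketch from earlier and again only an assumption about the real helper:

verify_nr_hugepages_sketch() {
    # $1 = pages the test asked for (512 for custom_alloc, 1025 for odd_alloc)
    local nr_hugepages=$1 total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # system-wide accounting: anything above the request must be surplus or reserved
    (( total == nr_hugepages + surp + resv )) || return 1
    echo "HugePages_Total=$total expecting $nr_hugepages (surp=$surp resv=$resv)"
}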
00:08:29.632 11:36:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:29.632 11:36:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:29.632 11:36:02 -- setup/common.sh@28 -- # mapfile -t mem 00:08:29.632 11:36:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9032816 kB' 'MemAvailable: 10543992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 497832 kB' 'Inactive: 1345060 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 50928 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165644 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97896 kB' 'KernelStack: 6480 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55452 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- 
setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.632 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.632 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 
00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.633 11:36:02 -- setup/common.sh@33 -- # echo 0 00:08:29.633 11:36:02 -- setup/common.sh@33 -- # return 0 00:08:29.633 11:36:02 -- setup/hugepages.sh@99 -- # surp=0 00:08:29.633 11:36:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:29.633 11:36:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:29.633 11:36:02 -- setup/common.sh@18 -- # local node= 00:08:29.633 11:36:02 -- setup/common.sh@19 -- # local var val 00:08:29.633 11:36:02 -- setup/common.sh@20 -- # local mem_f mem 00:08:29.633 11:36:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:29.633 11:36:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:29.633 11:36:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:29.633 11:36:02 -- setup/common.sh@28 -- # mapfile -t mem 00:08:29.633 11:36:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:29.633 11:36:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9032816 kB' 'MemAvailable: 10543992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498044 kB' 'Inactive: 1345060 kB' 'Active(anon): 128856 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119984 kB' 'Mapped: 50928 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165644 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97896 kB' 
'KernelStack: 6464 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55452 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.633 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.633 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 
00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.634 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.634 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 
00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:29.635 11:36:02 -- setup/common.sh@33 -- # echo 0 00:08:29.635 11:36:02 -- setup/common.sh@33 -- # return 0 00:08:29.635 11:36:02 -- setup/hugepages.sh@100 -- # resv=0 00:08:29.635 11:36:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:29.635 nr_hugepages=512 00:08:29.635 resv_hugepages=0 00:08:29.635 11:36:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:29.635 surplus_hugepages=0 00:08:29.635 11:36:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:29.635 anon_hugepages=0 00:08:29.635 11:36:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:29.635 11:36:02 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:29.635 11:36:02 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:29.635 11:36:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:29.635 11:36:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:29.635 11:36:02 -- setup/common.sh@18 -- # local node= 00:08:29.635 11:36:02 -- setup/common.sh@19 -- # local var val 00:08:29.635 11:36:02 -- setup/common.sh@20 -- # local mem_f mem 00:08:29.635 11:36:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:29.635 11:36:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:29.635 11:36:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:29.635 11:36:02 -- setup/common.sh@28 -- # mapfile -t mem 00:08:29.635 11:36:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:29.635 11:36:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9032816 kB' 'MemAvailable: 10543992 kB' 'Buffers: 2684 kB' 'Cached: 1722048 kB' 'SwapCached: 0 kB' 'Active: 498196 kB' 'Inactive: 1345060 kB' 'Active(anon): 129008 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120172 kB' 'Mapped: 50928 kB' 'Shmem: 10484 kB' 'KReclaimable: 67748 kB' 'Slab: 165644 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97896 kB' 'KernelStack: 6496 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 324176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55452 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 
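The trace up to this point is setup/common.sh's get_meminfo scanning /proc/meminfo field by field: each line is split on ': ' into a name and a value, every non-matching name falls through to continue, and the value of the requested field (HugePages_Surp, HugePages_Rsvd, then HugePages_Total here) is echoed back to hugepages.sh, which asserts that 512 == nr_hugepages + surp + resv. A minimal standalone sketch of that lookup, assuming a plain bash shell and only the system-wide /proc/meminfo (the helper name below is illustrative, not the project's function):

# Minimal sketch of the meminfo lookup this trace performs; the real
# setup/common.sh helper also handles per-NUMA-node meminfo files.
get_meminfo_sketch() {
    local get=$1              # field to look up, e.g. HugePages_Total
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other field, as in the trace
        echo "$val"                        # numeric value; a trailing kB unit falls into _
        return 0
    done < /proc/meminfo
    return 1
}

# Usage mirroring the checks in the trace (values as reported above):
# nr=$(get_meminfo_sketch HugePages_Total)     # 512
# surp=$(get_meminfo_sketch HugePages_Surp)    # 0
# resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
# (( 512 == nr + surp + resv )) && echo "nr_hugepages=$nr"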
00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.635 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.635 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 
00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.636 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.636 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:29.636 11:36:02 -- setup/common.sh@33 -- # echo 512 00:08:29.636 11:36:02 -- setup/common.sh@33 -- # return 0 00:08:29.636 11:36:02 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:29.636 11:36:02 -- setup/hugepages.sh@112 -- # get_nodes 00:08:29.636 11:36:02 -- setup/hugepages.sh@27 -- # local node 00:08:29.636 11:36:02 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:08:29.636 11:36:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:29.636 11:36:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:29.636 11:36:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:29.637 11:36:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:29.637 11:36:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:29.637 11:36:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:29.637 11:36:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:29.637 11:36:02 -- setup/common.sh@18 -- # local node=0 00:08:29.637 11:36:02 -- setup/common.sh@19 -- # local var val 00:08:29.637 11:36:02 -- setup/common.sh@20 -- # local mem_f mem 00:08:29.637 11:36:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:29.637 11:36:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:29.637 11:36:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:29.637 11:36:02 -- setup/common.sh@28 -- # mapfile -t mem 00:08:29.637 11:36:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9032816 kB' 'MemUsed: 3206304 kB' 'SwapCached: 0 kB' 'Active: 498208 kB' 'Inactive: 1345060 kB' 'Active(anon): 129020 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724732 kB' 'Mapped: 50928 kB' 'AnonPages: 120160 kB' 'Shmem: 10484 kB' 'KernelStack: 6496 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67748 kB' 'Slab: 165644 kB' 'SReclaimable: 67748 kB' 'SUnreclaim: 97896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 
11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.637 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.637 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # continue 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # IFS=': ' 00:08:29.638 11:36:02 -- setup/common.sh@31 -- # read -r var val _ 00:08:29.638 11:36:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:29.638 11:36:02 -- setup/common.sh@33 -- # echo 0 00:08:29.638 11:36:02 -- setup/common.sh@33 -- # return 0 00:08:29.638 11:36:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:29.638 11:36:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:29.638 11:36:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:29.638 11:36:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:29.638 11:36:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:29.638 node0=512 expecting 512 00:08:29.638 11:36:02 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:29.638 00:08:29.638 real 0m0.709s 00:08:29.638 user 0m0.334s 00:08:29.638 sys 0m0.417s 00:08:29.638 11:36:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.638 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:29.638 ************************************ 00:08:29.638 END TEST custom_alloc 00:08:29.638 ************************************ 00:08:29.638 11:36:02 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:29.638 11:36:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.638 11:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.638 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:29.638 ************************************ 00:08:29.638 START TEST no_shrink_alloc 00:08:29.638 ************************************ 00:08:29.638 11:36:02 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:08:29.638 11:36:02 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:29.638 11:36:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:29.638 11:36:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:29.638 11:36:02 -- 
setup/hugepages.sh@51 -- # shift 00:08:29.638 11:36:02 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:29.638 11:36:02 -- setup/hugepages.sh@52 -- # local node_ids 00:08:29.638 11:36:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:29.638 11:36:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:29.638 11:36:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:29.898 11:36:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:29.898 11:36:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:29.898 11:36:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:29.898 11:36:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:29.898 11:36:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:29.898 11:36:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:29.898 11:36:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:29.898 11:36:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:29.898 11:36:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:29.898 11:36:02 -- setup/hugepages.sh@73 -- # return 0 00:08:29.898 11:36:02 -- setup/hugepages.sh@198 -- # setup output 00:08:29.898 11:36:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:29.898 11:36:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:30.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:30.159 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:30.159 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:30.159 11:36:03 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:30.159 11:36:03 -- setup/hugepages.sh@89 -- # local node 00:08:30.159 11:36:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:30.159 11:36:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:30.159 11:36:03 -- setup/hugepages.sh@92 -- # local surp 00:08:30.159 11:36:03 -- setup/hugepages.sh@93 -- # local resv 00:08:30.159 11:36:03 -- setup/hugepages.sh@94 -- # local anon 00:08:30.159 11:36:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:30.159 11:36:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:30.159 11:36:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:30.159 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:30.159 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:30.159 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:30.159 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:30.159 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:30.159 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:30.159 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:30.159 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:30.421 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.421 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983736 kB' 'MemAvailable: 9494908 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 495360 kB' 'Inactive: 1345064 kB' 'Active(anon): 126172 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117280 kB' 
'Mapped: 50144 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165496 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97764 kB' 'KernelStack: 6368 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55404 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.422 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.422 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:30.423 11:36:03 -- setup/common.sh@33 -- # echo 0 00:08:30.423 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:30.423 11:36:03 -- setup/hugepages.sh@97 -- # anon=0 00:08:30.423 11:36:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:30.423 11:36:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:30.423 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:30.423 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:30.423 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:30.423 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:30.423 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:30.423 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:30.423 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:30.423 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983736 kB' 'MemAvailable: 9494908 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 495288 kB' 'Inactive: 1345064 kB' 'Active(anon): 126100 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117188 kB' 'Mapped: 50032 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165492 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97760 kB' 'KernelStack: 6368 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55388 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 
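The printf just above is the payload of this part of the trace: get_meminfo takes one snapshot of the meminfo file, and the long runs of [[ Key == \R\e\q\u\e\s\t\e\d ]] / continue entries before and after it are simply the per-field scan for the one requested key (AnonHugePages a moment ago, HugePages_Surp now). A minimal sketch of that lookup idea, with illustrative names rather than the verbatim setup/common.sh code, would look roughly like this:

shopt -s extglob   # needed for the +([0-9]) pattern below

# Minimal sketch of the lookup traced here (illustrative, not SPDK's exact code).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node variant
    fi
    mapfile -t mem < "$mem_f"            # one consistent snapshot of the file
    mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node 0 " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                  # e.g. 0 for HugePages_Surp in this run
            return 0
        fi
    done
    echo 0                               # key not present in the snapshot
}

Each [[ Key == ... ]] / continue pair in the log corresponds to one iteration of such a loop; a scan ends at the "# echo ..." / "# return 0" entries.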
00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 
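A note on reading these traces: the backslash-laden right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not literal script text. This appears to be how bash's xtrace renders a quoted word on the right of == inside [[ ]], escaping every character so it reads as a plain string rather than a glob pattern. A hypothetical snippet reproducing the effect:

set -x
key=HugePages_Surp
[[ MemTotal == "$key" ]]
# expected xtrace output, roughly:
# + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]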
00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.423 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.423 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.424 11:36:03 -- setup/common.sh@33 -- # echo 0 00:08:30.424 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:30.424 11:36:03 -- setup/hugepages.sh@99 -- # surp=0 00:08:30.424 11:36:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:30.424 11:36:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:30.424 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:30.424 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:30.424 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:30.424 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:30.424 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:30.424 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:30.424 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:30.424 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983892 kB' 'MemAvailable: 9495064 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 494960 kB' 'Inactive: 1345064 kB' 'Active(anon): 125772 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116884 kB' 'Mapped: 50064 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165476 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97744 kB' 'KernelStack: 6352 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55372 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.424 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.424 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # 
continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 
11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.425 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.425 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:30.425 11:36:03 -- setup/common.sh@33 -- # echo 0 00:08:30.425 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:30.425 11:36:03 -- setup/hugepages.sh@100 -- # resv=0 00:08:30.425 11:36:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:30.425 nr_hugepages=1024 00:08:30.425 resv_hugepages=0 00:08:30.426 11:36:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:30.426 surplus_hugepages=0 00:08:30.426 11:36:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:30.426 anon_hugepages=0 00:08:30.426 11:36:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:30.426 11:36:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:30.426 11:36:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:30.426 11:36:03 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:08:30.426 11:36:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:30.426 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:30.426 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:30.426 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:30.426 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:30.426 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:30.426 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:30.426 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:30.426 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983892 kB' 'MemAvailable: 9495064 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 495164 kB' 'Inactive: 1345064 kB' 'Active(anon): 125976 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117088 kB' 'Mapped: 50064 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165472 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97740 kB' 'KernelStack: 6336 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55388 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 
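Before this HugePages_Total scan started, hugepages.sh had already collected anon=0, surp=0 and resv=0 and echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. The consistency check it traced reduces to plain arithmetic on those values; an illustrative restatement with this run's numbers:

# values taken from the traces above (this run only)
nr_hugepages=1024
surp=0    # HugePages_Surp
resv=0    # HugePages_Rsvd
if (( 1024 == nr_hugepages + surp + resv )); then
    echo "global hugepage accounting is consistent"
fi
# the HugePages_Total scan in progress then confirms the kernel also reports 1024 pages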
00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.426 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.426 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 
-- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:30.427 11:36:03 -- setup/common.sh@33 -- # echo 1024 00:08:30.427 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:30.427 11:36:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:30.427 11:36:03 -- setup/hugepages.sh@112 -- # get_nodes 00:08:30.427 11:36:03 -- setup/hugepages.sh@27 -- # local node 00:08:30.427 11:36:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:30.427 11:36:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:30.427 11:36:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:30.427 11:36:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:30.427 11:36:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:30.427 11:36:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:30.427 11:36:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:30.427 11:36:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:30.427 11:36:03 -- setup/common.sh@18 -- # local node=0 00:08:30.427 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:30.427 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:30.427 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:30.427 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:30.427 11:36:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:30.427 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:30.427 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983892 kB' 'MemUsed: 4255228 kB' 'SwapCached: 0 kB' 'Active: 495116 kB' 'Inactive: 1345064 kB' 'Active(anon): 125928 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 
1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724736 kB' 'Mapped: 50064 kB' 'AnonPages: 117016 kB' 'Shmem: 10484 kB' 'KernelStack: 6336 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67732 kB' 'Slab: 165472 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.427 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.427 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 
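This pass differs from the earlier ones in that get_meminfo was called with a node argument (HugePages_Surp 0), so the snapshot printed above came from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, which is why it carries FilePages and MemUsed instead of Cached and MemAvailable. Using the hypothetical helper sketched earlier, the equivalent illustrative lookups against node 0 would be:

# illustrative only; get_meminfo_sketch is the sketch defined above, not SPDK's own helper
get_meminfo_sketch HugePages_Surp 0    # -> 0     (node 0 surplus pages, per the snapshot)
get_meminfo_sketch HugePages_Total 0   # -> 1024  (node 0 total pages, per the snapshot)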
00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # continue 00:08:30.428 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 
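The entries that follow close out the per-node bookkeeping: the reserved count and node 0's own surplus are both added into the tally, and since both are zero in this run the count stays at 1024, which is what the "node0=1024 expecting 1024" line and the [[ 1024 == \1\0\2\4 ]] check below report. A hypothetical restatement of that tally:

# hypothetical variable names; the values are the ones traced in this run
node0_pages=1024   # expected hugepages on node 0
resv=0             # global reserved pages
node0_surp=0       # node 0 surplus pages (result of the scan above)
(( node0_pages += resv ))        # still 1024
(( node0_pages += node0_surp ))  # still 1024
echo "node0=${node0_pages} expecting 1024"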
00:08:30.428 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:30.428 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:30.428 11:36:03 -- setup/common.sh@33 -- # echo 0 00:08:30.428 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:30.428 11:36:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:30.428 11:36:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:30.428 11:36:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:30.428 11:36:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:30.428 node0=1024 expecting 1024 00:08:30.428 11:36:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:30.428 11:36:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:30.428 11:36:03 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:08:30.428 11:36:03 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:08:30.428 11:36:03 -- setup/hugepages.sh@202 -- # setup output 00:08:30.428 11:36:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:30.428 11:36:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:31.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:31.003 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:31.003 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:31.003 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:08:31.003 11:36:03 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:08:31.003 11:36:03 -- setup/hugepages.sh@89 -- # local node 00:08:31.003 11:36:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:31.003 11:36:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:31.003 11:36:03 -- setup/hugepages.sh@92 -- # local surp 00:08:31.003 11:36:03 -- setup/hugepages.sh@93 -- # local resv 00:08:31.003 11:36:03 -- setup/hugepages.sh@94 -- # local anon 00:08:31.003 11:36:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:31.003 11:36:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:31.003 11:36:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:31.003 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:31.003 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:31.003 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:31.003 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:31.003 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:31.003 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:31.003 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:31.003 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:31.003 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.003 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983520 kB' 'MemAvailable: 9494692 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 495556 kB' 'Inactive: 1345064 kB' 'Active(anon): 126368 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117488 kB' 'Mapped: 50136 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165412 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97680 kB' 'KernelStack: 6360 kB' 'PageTables: 4104 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55404 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:31.003 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.003 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.003 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.003 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.003 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.003 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.003 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.003 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.003 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 
11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 
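[editor's note] The scan in progress here is the AnonHugePages query that hugepages.sh@97 issues right after its THP policy check (the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" line earlier in the trace); on this VM it resolves to anon=0. A hedged standalone equivalent of that policy check, assuming the usual sysfs path rather than quoting the script:

# The policy string compared at hugepages.sh@96 normally lives here:
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so AnonHugePages from /proc/meminfo is worth recording
    echo "THP policy: $thp"
fi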
00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.004 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.004 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:31.004 11:36:03 -- setup/common.sh@33 -- # echo 0 00:08:31.004 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:31.004 11:36:03 -- setup/hugepages.sh@97 -- # anon=0 00:08:31.004 11:36:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:31.004 11:36:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:31.004 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:31.004 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:31.004 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:31.004 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:31.004 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:31.004 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:31.005 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:31.005 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:31.005 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983700 kB' 'MemAvailable: 9494872 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 495120 kB' 'Inactive: 1345064 kB' 'Active(anon): 125932 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117020 kB' 'Mapped: 50064 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165416 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97684 kB' 'KernelStack: 6336 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55388 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 
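[editor's note] The meminfo snapshot printed just above (and re-printed for each query) reports HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, so the numbers are self-consistent: 1024 pages x 2048 kB = 2097152 kB (2 GiB) of hugetlb memory, all of it still free. A trivial check of that relationship, using the values from the dump above:

# Sanity-check the snapshot: Hugetlb should equal HugePages_Total * Hugepagesize.
total_pages=1024      # HugePages_Total from the dump above
page_kb=2048          # Hugepagesize in kB
echo $(( total_pages * page_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'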
00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 
11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.005 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.005 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': 
' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.006 11:36:03 -- setup/common.sh@33 -- # echo 0 00:08:31.006 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:31.006 11:36:03 -- setup/hugepages.sh@99 -- # surp=0 00:08:31.006 11:36:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:31.006 11:36:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:31.006 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:31.006 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:31.006 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:31.006 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:31.006 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:31.006 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:31.006 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:31.006 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983700 kB' 'MemAvailable: 9494872 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 495272 kB' 'Inactive: 1345064 kB' 'Active(anon): 126084 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117172 kB' 'Mapped: 50064 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165416 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97684 kB' 'KernelStack: 6336 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55388 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.006 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.006 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 
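[editor's note] This pass of the same scan is looking for HugePages_Rsvd, which also resolves to 0 (resv=0 a little further down). Outside the test harness the same question can be answered in one line; this is an alternative quick check, not what the SPDK script does:

# One-line equivalent of the traced lookup (an alternative, not the script's method):
awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo   # 0 on the system above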
00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.007 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.007 11:36:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:31.007 11:36:03 -- setup/common.sh@33 -- # echo 0 00:08:31.007 11:36:03 -- setup/common.sh@33 -- # return 0 00:08:31.007 11:36:03 -- setup/hugepages.sh@100 -- # resv=0 00:08:31.007 nr_hugepages=1024 00:08:31.007 11:36:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:31.007 resv_hugepages=0 00:08:31.007 11:36:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:31.007 surplus_hugepages=0 00:08:31.007 11:36:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:31.007 anon_hugepages=0 00:08:31.007 11:36:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:31.007 11:36:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:31.007 11:36:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:31.007 11:36:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:31.007 11:36:03 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:08:31.007 11:36:03 -- setup/common.sh@18 -- # local node= 00:08:31.007 11:36:03 -- setup/common.sh@19 -- # local var val 00:08:31.007 11:36:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:31.007 11:36:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:31.007 11:36:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:31.007 11:36:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:31.007 11:36:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:31.007 11:36:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:31.008 11:36:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983700 kB' 'MemAvailable: 9494872 kB' 'Buffers: 2684 kB' 'Cached: 1722052 kB' 'SwapCached: 0 kB' 'Active: 495408 kB' 'Inactive: 1345064 kB' 'Active(anon): 126220 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117380 kB' 'Mapped: 50064 kB' 'Shmem: 10484 kB' 'KReclaimable: 67732 kB' 'Slab: 165416 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97684 kB' 'KernelStack: 6384 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 307692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55388 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- 
setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 
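[editor's note] What verify_nr_hugepages is building toward (the hugepages.sh@100-110 lines above) is the accounting check HugePages_Total == nr_hugepages + surp + resv, here 1024 == 1024 + 0 + 0, followed by a per-node pass that re-reads the same counters from /sys/devices/system/node/node*/meminfo (the node0 read appears a little further down in this trace). A compact sketch of that accounting under the values seen above; the helper name and structure are illustrative, not the script's:

#!/usr/bin/env bash
# Sketch of the accounting verify_nr_hugepages performs (assumption: simplified;
# the real script keeps per-node arrays in test/setup-style hugepages.sh).
meminfo_val() {
    # Works for both /proc/meminfo ("HugePages_Total: 1024") and the per-node
    # files ("Node 0 HugePages_Total: 1024"): print the token after the key.
    awk -v k="$1:" '{ for (i = 1; i < NF; i++) if ($i == k) print $(i + 1) }' "${2:-/proc/meminfo}"
}

nr_hugepages=1024                              # expected page count
total=$(meminfo_val HugePages_Total)           # 1024 in the dump above
surp=$(meminfo_val HugePages_Surp)             # 0
resv=$(meminfo_val HugePages_Rsvd)             # 0

(( total == nr_hugepages + surp + resv )) && echo "global hugepage count OK"

# Per-node re-check, mirroring the node0 read later in the trace.
for node in /sys/devices/system/node/node[0-9]*; do
    echo "node${node##*node}: $(meminfo_val HugePages_Surp "$node/meminfo") surplus pages"
done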
00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.008 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.008 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.009 
11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:31.009 11:36:04 -- setup/common.sh@33 -- # echo 1024 00:08:31.009 11:36:04 -- setup/common.sh@33 -- # return 0 00:08:31.009 11:36:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:31.009 11:36:04 -- setup/hugepages.sh@112 -- # get_nodes 00:08:31.009 11:36:04 -- setup/hugepages.sh@27 -- # local node 00:08:31.009 11:36:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:31.009 11:36:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:31.009 11:36:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:31.009 11:36:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:31.009 11:36:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:31.009 11:36:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:31.009 11:36:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:31.009 11:36:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:31.009 11:36:04 -- setup/common.sh@18 -- # local node=0 00:08:31.009 11:36:04 -- setup/common.sh@19 -- # local var val 00:08:31.009 11:36:04 -- setup/common.sh@20 -- # local mem_f mem 00:08:31.009 11:36:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:31.009 11:36:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:31.009 11:36:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:31.009 11:36:04 -- setup/common.sh@28 -- # mapfile -t mem 00:08:31.009 11:36:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7983952 kB' 'MemUsed: 4255168 kB' 'SwapCached: 0 kB' 'Active: 495240 kB' 'Inactive: 1345064 kB' 'Active(anon): 126052 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 1724736 kB' 'Mapped: 50064 kB' 'AnonPages: 116980 kB' 'Shmem: 10484 kB' 'KernelStack: 6352 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67732 kB' 'Slab: 165404 kB' 'SReclaimable: 67732 kB' 'SUnreclaim: 97672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 
11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.009 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.009 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- 
# continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@32 -- # continue 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:31.010 11:36:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:31.010 11:36:04 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:31.010 11:36:04 -- setup/common.sh@33 -- # echo 0 00:08:31.010 11:36:04 -- setup/common.sh@33 -- # return 0 00:08:31.010 11:36:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:31.010 11:36:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:31.010 11:36:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:31.010 11:36:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:31.010 node0=1024 expecting 1024 00:08:31.010 11:36:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:31.010 11:36:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:31.010 00:08:31.010 real 0m1.355s 00:08:31.010 user 0m0.593s 00:08:31.010 sys 0m0.847s 00:08:31.010 11:36:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.010 11:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.010 ************************************ 00:08:31.010 END TEST no_shrink_alloc 00:08:31.010 ************************************ 00:08:31.287 11:36:04 -- setup/hugepages.sh@217 -- # clear_hp 00:08:31.287 11:36:04 -- setup/hugepages.sh@37 -- # local node hp 00:08:31.287 11:36:04 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:31.287 11:36:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:31.287 11:36:04 -- setup/hugepages.sh@41 -- # echo 0 00:08:31.287 11:36:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:31.287 11:36:04 -- setup/hugepages.sh@41 -- # echo 0 00:08:31.287 11:36:04 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:31.287 11:36:04 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:31.287 00:08:31.287 real 0m5.750s 00:08:31.287 user 0m2.540s 00:08:31.287 sys 0m3.436s 00:08:31.287 11:36:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.287 11:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.287 ************************************ 00:08:31.287 END TEST hugepages 00:08:31.287 ************************************ 00:08:31.287 11:36:04 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:31.287 11:36:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:31.287 11:36:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.287 11:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.287 ************************************ 00:08:31.287 START TEST driver 00:08:31.287 ************************************ 00:08:31.287 11:36:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:31.287 * Looking for test storage... 
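The long run of "continue" lines above is setup/common.sh's get_meminfo walking /proc/meminfo (and then node0's meminfo) one field at a time until it reaches the requested key: HugePages_Total first, then HugePages_Surp for node 0. A minimal stand-alone sketch of that lookup, with a hypothetical helper name in place of the test's own function:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo; its lines carry a "Node <N> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # value only; a trailing unit such as kB lands in $_
            return 0
        fi
    done < <(sed "s/^Node ${node:-X} //" "$mem_f")
    return 1
}

# The kind of check the test then makes (the "node0=1024 expecting 1024" echo above):
echo "node0=$(get_meminfo_sketch HugePages_Total 0) expecting 1024"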
00:08:31.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:31.287 11:36:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:31.287 11:36:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:31.287 11:36:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:31.546 11:36:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:31.546 11:36:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:31.546 11:36:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:31.546 11:36:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:31.546 11:36:04 -- scripts/common.sh@335 -- # IFS=.-: 00:08:31.546 11:36:04 -- scripts/common.sh@335 -- # read -ra ver1 00:08:31.546 11:36:04 -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.546 11:36:04 -- scripts/common.sh@336 -- # read -ra ver2 00:08:31.546 11:36:04 -- scripts/common.sh@337 -- # local 'op=<' 00:08:31.546 11:36:04 -- scripts/common.sh@339 -- # ver1_l=2 00:08:31.546 11:36:04 -- scripts/common.sh@340 -- # ver2_l=1 00:08:31.546 11:36:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:31.546 11:36:04 -- scripts/common.sh@343 -- # case "$op" in 00:08:31.546 11:36:04 -- scripts/common.sh@344 -- # : 1 00:08:31.546 11:36:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:31.546 11:36:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.546 11:36:04 -- scripts/common.sh@364 -- # decimal 1 00:08:31.546 11:36:04 -- scripts/common.sh@352 -- # local d=1 00:08:31.546 11:36:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.546 11:36:04 -- scripts/common.sh@354 -- # echo 1 00:08:31.546 11:36:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:31.546 11:36:04 -- scripts/common.sh@365 -- # decimal 2 00:08:31.546 11:36:04 -- scripts/common.sh@352 -- # local d=2 00:08:31.546 11:36:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.546 11:36:04 -- scripts/common.sh@354 -- # echo 2 00:08:31.546 11:36:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:31.546 11:36:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:31.546 11:36:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:31.546 11:36:04 -- scripts/common.sh@367 -- # return 0 00:08:31.546 11:36:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.546 11:36:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:31.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.546 --rc genhtml_branch_coverage=1 00:08:31.546 --rc genhtml_function_coverage=1 00:08:31.546 --rc genhtml_legend=1 00:08:31.546 --rc geninfo_all_blocks=1 00:08:31.546 --rc geninfo_unexecuted_blocks=1 00:08:31.546 00:08:31.546 ' 00:08:31.546 11:36:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:31.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.546 --rc genhtml_branch_coverage=1 00:08:31.546 --rc genhtml_function_coverage=1 00:08:31.546 --rc genhtml_legend=1 00:08:31.546 --rc geninfo_all_blocks=1 00:08:31.546 --rc geninfo_unexecuted_blocks=1 00:08:31.546 00:08:31.546 ' 00:08:31.546 11:36:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:31.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.546 --rc genhtml_branch_coverage=1 00:08:31.546 --rc genhtml_function_coverage=1 00:08:31.546 --rc genhtml_legend=1 00:08:31.546 --rc geninfo_all_blocks=1 00:08:31.546 --rc geninfo_unexecuted_blocks=1 00:08:31.546 00:08:31.546 ' 00:08:31.546 11:36:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:31.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.546 --rc genhtml_branch_coverage=1 00:08:31.546 --rc genhtml_function_coverage=1 00:08:31.546 --rc genhtml_legend=1 00:08:31.546 --rc geninfo_all_blocks=1 00:08:31.546 --rc geninfo_unexecuted_blocks=1 00:08:31.546 00:08:31.546 ' 00:08:31.546 11:36:04 -- setup/driver.sh@68 -- # setup reset 00:08:31.546 11:36:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:31.546 11:36:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.114 11:36:05 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:08:32.114 11:36:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.114 11:36:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.114 11:36:05 -- common/autotest_common.sh@10 -- # set +x 00:08:32.114 ************************************ 00:08:32.114 START TEST guess_driver 00:08:32.114 ************************************ 00:08:32.114 11:36:05 -- common/autotest_common.sh@1114 -- # guess_driver 00:08:32.114 11:36:05 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:08:32.114 11:36:05 -- setup/driver.sh@47 -- # local fail=0 00:08:32.114 11:36:05 -- setup/driver.sh@49 -- # pick_driver 00:08:32.114 11:36:05 -- setup/driver.sh@36 -- # vfio 00:08:32.114 11:36:05 -- setup/driver.sh@21 -- # local iommu_grups 00:08:32.114 11:36:05 -- setup/driver.sh@22 -- # local unsafe_vfio 00:08:32.114 11:36:05 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:08:32.114 11:36:05 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:08:32.114 11:36:05 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:08:32.114 11:36:05 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:08:32.114 11:36:05 -- setup/driver.sh@32 -- # return 1 00:08:32.114 11:36:05 -- setup/driver.sh@38 -- # uio 00:08:32.114 11:36:05 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:08:32.114 11:36:05 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:08:32.114 11:36:05 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:08:32.114 11:36:05 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:08:32.114 11:36:05 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:08:32.114 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:08:32.114 11:36:05 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:08:32.114 11:36:05 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:08:32.114 11:36:05 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:08:32.114 Looking for driver=uio_pci_generic 00:08:32.114 11:36:05 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:08:32.114 11:36:05 -- setup/driver.sh@45 -- # setup output config 00:08:32.114 11:36:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:32.114 11:36:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:32.114 11:36:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:33.050 11:36:05 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:08:33.050 11:36:05 -- setup/driver.sh@58 -- # continue 00:08:33.050 11:36:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:33.050 11:36:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:33.050 11:36:06 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:08:33.050 11:36:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:33.310 11:36:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:33.310 11:36:06 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:33.310 11:36:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:33.310 11:36:06 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:08:33.310 11:36:06 -- setup/driver.sh@65 -- # setup reset 00:08:33.310 11:36:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:33.310 11:36:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:33.878 00:08:33.878 real 0m1.796s 00:08:33.878 user 0m0.644s 00:08:33.878 sys 0m1.215s 00:08:33.878 11:36:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.878 11:36:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.878 ************************************ 00:08:33.878 END TEST guess_driver 00:08:33.878 ************************************ 00:08:34.138 00:08:34.138 real 0m2.809s 00:08:34.138 user 0m1.060s 00:08:34.138 sys 0m1.909s 00:08:34.138 11:36:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.138 11:36:06 -- common/autotest_common.sh@10 -- # set +x 00:08:34.138 ************************************ 00:08:34.138 END TEST driver 00:08:34.138 ************************************ 00:08:34.138 11:36:06 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:34.138 11:36:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.138 11:36:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.138 11:36:06 -- common/autotest_common.sh@10 -- # set +x 00:08:34.138 ************************************ 00:08:34.138 START TEST devices 00:08:34.138 ************************************ 00:08:34.138 11:36:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:34.138 * Looking for test storage... 00:08:34.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:34.138 11:36:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.138 11:36:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.138 11:36:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.507 11:36:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.507 11:36:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.507 11:36:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.507 11:36:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.507 11:36:07 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.507 11:36:07 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.507 11:36:07 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.507 11:36:07 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.507 11:36:07 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.507 11:36:07 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.507 11:36:07 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.507 11:36:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.507 11:36:07 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.507 11:36:07 -- scripts/common.sh@344 -- # : 1 00:08:34.507 11:36:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.507 11:36:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.507 11:36:07 -- scripts/common.sh@364 -- # decimal 1 00:08:34.507 11:36:07 -- scripts/common.sh@352 -- # local d=1 00:08:34.507 11:36:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.507 11:36:07 -- scripts/common.sh@354 -- # echo 1 00:08:34.507 11:36:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.507 11:36:07 -- scripts/common.sh@365 -- # decimal 2 00:08:34.507 11:36:07 -- scripts/common.sh@352 -- # local d=2 00:08:34.507 11:36:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.507 11:36:07 -- scripts/common.sh@354 -- # echo 2 00:08:34.507 11:36:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.507 11:36:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.507 11:36:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.507 11:36:07 -- scripts/common.sh@367 -- # return 0 00:08:34.507 11:36:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.507 11:36:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.507 --rc genhtml_branch_coverage=1 00:08:34.507 --rc genhtml_function_coverage=1 00:08:34.507 --rc genhtml_legend=1 00:08:34.507 --rc geninfo_all_blocks=1 00:08:34.507 --rc geninfo_unexecuted_blocks=1 00:08:34.507 00:08:34.507 ' 00:08:34.507 11:36:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.507 --rc genhtml_branch_coverage=1 00:08:34.507 --rc genhtml_function_coverage=1 00:08:34.507 --rc genhtml_legend=1 00:08:34.507 --rc geninfo_all_blocks=1 00:08:34.507 --rc geninfo_unexecuted_blocks=1 00:08:34.507 00:08:34.507 ' 00:08:34.507 11:36:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.507 --rc genhtml_branch_coverage=1 00:08:34.507 --rc genhtml_function_coverage=1 00:08:34.507 --rc genhtml_legend=1 00:08:34.507 --rc geninfo_all_blocks=1 00:08:34.507 --rc geninfo_unexecuted_blocks=1 00:08:34.507 00:08:34.507 ' 00:08:34.507 11:36:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.507 --rc genhtml_branch_coverage=1 00:08:34.507 --rc genhtml_function_coverage=1 00:08:34.507 --rc genhtml_legend=1 00:08:34.507 --rc geninfo_all_blocks=1 00:08:34.507 --rc geninfo_unexecuted_blocks=1 00:08:34.507 00:08:34.507 ' 00:08:34.507 11:36:07 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:08:34.507 11:36:07 -- setup/devices.sh@192 -- # setup reset 00:08:34.507 11:36:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:34.507 11:36:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:35.445 11:36:08 -- setup/devices.sh@194 -- # get_zoned_devs 00:08:35.445 11:36:08 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:35.445 11:36:08 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:35.445 11:36:08 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:35.445 11:36:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:35.445 11:36:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:35.445 11:36:08 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:35.445 11:36:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:35.445 11:36:08 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:08:35.445 11:36:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:35.445 11:36:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:35.445 11:36:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:35.445 11:36:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:35.445 11:36:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:35.445 11:36:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:35.445 11:36:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:35.445 11:36:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:35.445 11:36:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:35.445 11:36:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:35.445 11:36:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:35.445 11:36:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:35.445 11:36:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:35.445 11:36:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:35.445 11:36:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:35.445 11:36:08 -- setup/devices.sh@196 -- # blocks=() 00:08:35.445 11:36:08 -- setup/devices.sh@196 -- # declare -a blocks 00:08:35.445 11:36:08 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:08:35.445 11:36:08 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:08:35.445 11:36:08 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:08:35.445 11:36:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:35.445 11:36:08 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:08:35.445 11:36:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:08:35.445 11:36:08 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:08:35.445 11:36:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:08:35.445 No valid GPT data, bailing 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # pt= 00:08:35.445 11:36:08 -- scripts/common.sh@394 -- # return 1 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:08:35.445 11:36:08 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:35.445 11:36:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:35.445 11:36:08 -- setup/common.sh@80 -- # echo 5368709120 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:08:35.445 11:36:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:35.445 11:36:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:08:35.445 11:36:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:35.445 11:36:08 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:35.445 11:36:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
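Before the GPT probing above, the devices test screened each /sys/block/nvme* namespace for zoned mode (get_zoned_devs and the [[ none != none ]] checks at the top of this test); all four namespaces report "none", so none are excluded. A rough sketch of that screen, under a hypothetical helper name rather than the repo's own function:

list_zoned_nvme() {
    # Print the names of zoned nvme namespaces, i.e. the ones a filesystem test should skip.
    local dev zoned
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        zoned=$(<"$dev/queue/zoned")
        [[ $zoned != none ]] && echo "${dev##*/}"   # "none" means an ordinary namespace, which is kept
    done
    return 0
}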
00:08:35.445 11:36:08 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:08:35.445 11:36:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:08:35.445 No valid GPT data, bailing 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # pt= 00:08:35.445 11:36:08 -- scripts/common.sh@394 -- # return 1 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:08:35.445 11:36:08 -- setup/common.sh@76 -- # local dev=nvme1n1 00:08:35.445 11:36:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:08:35.445 11:36:08 -- setup/common.sh@80 -- # echo 4294967296 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:35.445 11:36:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:35.445 11:36:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:35.445 11:36:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:35.445 11:36:08 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:35.445 11:36:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:08:35.445 11:36:08 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:08:35.445 11:36:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:08:35.445 No valid GPT data, bailing 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # pt= 00:08:35.445 11:36:08 -- scripts/common.sh@394 -- # return 1 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:08:35.445 11:36:08 -- setup/common.sh@76 -- # local dev=nvme1n2 00:08:35.445 11:36:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:08:35.445 11:36:08 -- setup/common.sh@80 -- # echo 4294967296 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:35.445 11:36:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:35.445 11:36:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:35.445 11:36:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:08:35.445 11:36:08 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:35.445 11:36:08 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:35.445 11:36:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:08:35.445 11:36:08 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:08:35.445 11:36:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:08:35.445 No valid GPT data, bailing 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:35.445 11:36:08 -- scripts/common.sh@393 -- # pt= 00:08:35.445 11:36:08 -- scripts/common.sh@394 -- # return 1 00:08:35.445 11:36:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:08:35.724 11:36:08 -- setup/common.sh@76 -- # local dev=nvme1n3 00:08:35.724 11:36:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:08:35.724 11:36:08 -- setup/common.sh@80 -- # echo 4294967296 
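Each namespace goes through the same eligibility probe traced above: spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing", empty PTTYPE), and the size read back through sec_size_to_bytes has to clear min_disk_size (3221225472 bytes, i.e. 3 GiB); the comparison for nvme1n3 follows just below. A condensed sketch of that gate, using blkid alone and a hypothetical helper name:

is_free_and_big_enough() {
    local block=$1 min_disk_size=3221225472
    # Any PTTYPE reported by blkid means the namespace is already partitioned, so skip it.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null) ]] && return 1
    # /sys/block/<dev>/size counts 512-byte sectors regardless of the logical block size.
    local bytes=$(( $(<"/sys/block/$block/size") * 512 ))
    (( bytes >= min_disk_size ))
}

In this run nvme0n1 (5368709120 bytes) and nvme1n1 through nvme1n3 (4294967296 bytes each) all pass, which is where the (( 4 > 0 )) device count below comes from.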
00:08:35.724 11:36:08 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:35.724 11:36:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:35.724 11:36:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:35.724 11:36:08 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:08:35.724 11:36:08 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:08:35.724 11:36:08 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:08:35.724 11:36:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.724 11:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.724 11:36:08 -- common/autotest_common.sh@10 -- # set +x 00:08:35.724 ************************************ 00:08:35.724 START TEST nvme_mount 00:08:35.724 ************************************ 00:08:35.724 11:36:08 -- common/autotest_common.sh@1114 -- # nvme_mount 00:08:35.724 11:36:08 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:08:35.724 11:36:08 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:08:35.724 11:36:08 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:35.724 11:36:08 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:35.724 11:36:08 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:08:35.724 11:36:08 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:35.724 11:36:08 -- setup/common.sh@40 -- # local part_no=1 00:08:35.724 11:36:08 -- setup/common.sh@41 -- # local size=1073741824 00:08:35.724 11:36:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:35.724 11:36:08 -- setup/common.sh@44 -- # parts=() 00:08:35.724 11:36:08 -- setup/common.sh@44 -- # local parts 00:08:35.724 11:36:08 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:35.724 11:36:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:35.724 11:36:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:35.724 11:36:08 -- setup/common.sh@46 -- # (( part++ )) 00:08:35.724 11:36:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:35.724 11:36:08 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:35.724 11:36:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:35.724 11:36:08 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:36.660 Creating new GPT entries in memory. 00:08:36.660 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:36.660 other utilities. 00:08:36.660 11:36:09 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:36.660 11:36:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:36.660 11:36:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:36.660 11:36:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:36.660 11:36:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:38.063 Creating new GPT entries in memory. 00:08:38.063 The operation has completed successfully. 
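At this point the nvme_mount test has wiped nvme0n1 and created its first partition: sgdisk --zap-all drops the old label, the new partition is added under flock on the disk node (presumably to serialize against concurrent readers of the partition table), and scripts/sync_dev_uevents.sh waits for the nvme0n1p1 uevent before anything touches the new device. A minimal equivalent outside the harness might look like this, with udevadm settle as a stand-in for the repo's uevent wait:

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1 over the same sector range as the trace
udevadm settle                                     # stand-in for sync_dev_uevents.sh block/partition nvme0n1p1
mkfs.ext4 -qF "${disk}p1"                          # the test formats and mounts it next, as the following lines show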
00:08:38.063 11:36:10 -- setup/common.sh@57 -- # (( part++ )) 00:08:38.063 11:36:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:38.063 11:36:10 -- setup/common.sh@62 -- # wait 54055 00:08:38.063 11:36:10 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.063 11:36:10 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:38.063 11:36:10 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.063 11:36:10 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:38.063 11:36:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:38.063 11:36:10 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.063 11:36:10 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:38.063 11:36:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:38.063 11:36:10 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:38.063 11:36:10 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.063 11:36:10 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:38.063 11:36:10 -- setup/devices.sh@53 -- # local found=0 00:08:38.063 11:36:10 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:38.063 11:36:10 -- setup/devices.sh@56 -- # : 00:08:38.063 11:36:10 -- setup/devices.sh@59 -- # local pci status 00:08:38.063 11:36:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:38.063 11:36:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:38.063 11:36:10 -- setup/devices.sh@47 -- # setup output config 00:08:38.063 11:36:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:38.063 11:36:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:38.063 11:36:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:38.063 11:36:10 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:38.063 11:36:10 -- setup/devices.sh@63 -- # found=1 00:08:38.063 11:36:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:38.063 11:36:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:38.063 11:36:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:38.321 11:36:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:38.321 11:36:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:38.579 11:36:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:38.579 11:36:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:38.579 11:36:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:38.579 11:36:11 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:38.579 11:36:11 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.579 11:36:11 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:38.579 11:36:11 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:38.579 11:36:11 -- setup/devices.sh@110 -- # cleanup_nvme 00:08:38.579 11:36:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.579 11:36:11 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.579 11:36:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:38.579 11:36:11 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:38.579 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:38.579 11:36:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:38.579 11:36:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:38.838 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:38.838 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:38.838 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:38.838 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:38.838 11:36:11 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:38.838 11:36:11 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:38.838 11:36:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:38.838 11:36:11 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:38.838 11:36:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:39.096 11:36:11 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:39.096 11:36:11 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:39.096 11:36:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:39.096 11:36:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:08:39.096 11:36:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:39.096 11:36:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:39.096 11:36:11 -- setup/devices.sh@53 -- # local found=0 00:08:39.096 11:36:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:39.096 11:36:11 -- setup/devices.sh@56 -- # : 00:08:39.096 11:36:11 -- setup/devices.sh@59 -- # local pci status 00:08:39.096 11:36:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:39.096 11:36:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:39.096 11:36:11 -- setup/devices.sh@47 -- # setup output config 00:08:39.096 11:36:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:39.096 11:36:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:39.096 11:36:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:39.096 11:36:12 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:39.096 11:36:12 -- setup/devices.sh@63 -- # found=1 00:08:39.096 11:36:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:39.096 11:36:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:39.096 
11:36:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:39.665 11:36:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:39.665 11:36:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:39.665 11:36:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:39.665 11:36:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:39.665 11:36:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:39.665 11:36:12 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:39.665 11:36:12 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:39.665 11:36:12 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:39.665 11:36:12 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:39.665 11:36:12 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:39.665 11:36:12 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:08:39.665 11:36:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:39.665 11:36:12 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:39.665 11:36:12 -- setup/devices.sh@50 -- # local mount_point= 00:08:39.665 11:36:12 -- setup/devices.sh@51 -- # local test_file= 00:08:39.665 11:36:12 -- setup/devices.sh@53 -- # local found=0 00:08:39.665 11:36:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:39.665 11:36:12 -- setup/devices.sh@59 -- # local pci status 00:08:39.665 11:36:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:39.665 11:36:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:39.665 11:36:12 -- setup/devices.sh@47 -- # setup output config 00:08:39.665 11:36:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:39.665 11:36:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:40.235 11:36:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:40.235 11:36:13 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:40.235 11:36:13 -- setup/devices.sh@63 -- # found=1 00:08:40.235 11:36:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:40.235 11:36:13 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:40.235 11:36:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:40.494 11:36:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:40.494 11:36:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:40.494 11:36:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:40.494 11:36:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:40.753 11:36:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:40.753 11:36:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:40.753 11:36:13 -- setup/devices.sh@68 -- # return 0 00:08:40.753 11:36:13 -- setup/devices.sh@128 -- # cleanup_nvme 00:08:40.753 11:36:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:40.753 11:36:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:40.753 11:36:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:40.753 11:36:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:40.753 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:08:40.753 00:08:40.753 real 0m5.073s 00:08:40.753 user 0m1.091s 00:08:40.753 sys 0m1.516s 00:08:40.753 11:36:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.753 ************************************ 00:08:40.753 END TEST nvme_mount 00:08:40.753 ************************************ 00:08:40.753 11:36:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.753 11:36:13 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:40.753 11:36:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.753 11:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.753 11:36:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.753 ************************************ 00:08:40.753 START TEST dm_mount 00:08:40.753 ************************************ 00:08:40.753 11:36:13 -- common/autotest_common.sh@1114 -- # dm_mount 00:08:40.753 11:36:13 -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:40.753 11:36:13 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:40.753 11:36:13 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:40.753 11:36:13 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:40.753 11:36:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:40.753 11:36:13 -- setup/common.sh@40 -- # local part_no=2 00:08:40.753 11:36:13 -- setup/common.sh@41 -- # local size=1073741824 00:08:40.753 11:36:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:40.753 11:36:13 -- setup/common.sh@44 -- # parts=() 00:08:40.753 11:36:13 -- setup/common.sh@44 -- # local parts 00:08:40.753 11:36:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:40.753 11:36:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:40.753 11:36:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:40.753 11:36:13 -- setup/common.sh@46 -- # (( part++ )) 00:08:40.753 11:36:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:40.753 11:36:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:40.753 11:36:13 -- setup/common.sh@46 -- # (( part++ )) 00:08:40.753 11:36:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:40.753 11:36:13 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:40.753 11:36:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:40.753 11:36:13 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:41.692 Creating new GPT entries in memory. 00:08:41.692 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:41.692 other utilities. 00:08:41.692 11:36:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:41.692 11:36:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:41.692 11:36:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:41.692 11:36:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:41.692 11:36:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:43.073 Creating new GPT entries in memory. 00:08:43.073 The operation has completed successfully. 00:08:43.073 11:36:15 -- setup/common.sh@57 -- # (( part++ )) 00:08:43.073 11:36:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:43.073 11:36:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:08:43.073 11:36:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:43.073 11:36:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:43.712 The operation has completed successfully. 00:08:43.712 11:36:16 -- setup/common.sh@57 -- # (( part++ )) 00:08:43.712 11:36:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:43.712 11:36:16 -- setup/common.sh@62 -- # wait 54521 00:08:43.972 11:36:16 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:43.972 11:36:16 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:43.972 11:36:16 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:43.972 11:36:16 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:43.972 11:36:16 -- setup/devices.sh@160 -- # for t in {1..5} 00:08:43.972 11:36:16 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:43.972 11:36:16 -- setup/devices.sh@161 -- # break 00:08:43.972 11:36:16 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:43.972 11:36:16 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:43.972 11:36:16 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:43.972 11:36:16 -- setup/devices.sh@166 -- # dm=dm-0 00:08:43.972 11:36:16 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:43.972 11:36:16 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:43.972 11:36:16 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:43.972 11:36:16 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:43.972 11:36:16 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:43.972 11:36:16 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:43.972 11:36:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:43.972 11:36:16 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:43.972 11:36:16 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:43.972 11:36:16 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:43.972 11:36:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:43.972 11:36:16 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:43.972 11:36:16 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:43.972 11:36:16 -- setup/devices.sh@53 -- # local found=0 00:08:43.972 11:36:16 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:43.972 11:36:16 -- setup/devices.sh@56 -- # : 00:08:43.972 11:36:16 -- setup/devices.sh@59 -- # local pci status 00:08:43.972 11:36:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:43.972 11:36:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:43.972 11:36:16 -- setup/devices.sh@47 -- # setup output config 00:08:43.972 11:36:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:43.972 11:36:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:44.231 11:36:17 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:44.231 11:36:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:44.231 11:36:17 -- setup/devices.sh@63 -- # found=1 00:08:44.231 11:36:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.231 11:36:17 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:44.231 11:36:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.492 11:36:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:44.492 11:36:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.752 11:36:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:44.752 11:36:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.752 11:36:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:44.752 11:36:17 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:08:44.752 11:36:17 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:44.752 11:36:17 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:44.752 11:36:17 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:44.752 11:36:17 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:44.752 11:36:17 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:44.752 11:36:17 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:44.752 11:36:17 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:44.752 11:36:17 -- setup/devices.sh@50 -- # local mount_point= 00:08:44.752 11:36:17 -- setup/devices.sh@51 -- # local test_file= 00:08:44.752 11:36:17 -- setup/devices.sh@53 -- # local found=0 00:08:44.752 11:36:17 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:44.752 11:36:17 -- setup/devices.sh@59 -- # local pci status 00:08:44.752 11:36:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.752 11:36:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:44.752 11:36:17 -- setup/devices.sh@47 -- # setup output config 00:08:44.752 11:36:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:44.752 11:36:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:45.013 11:36:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.013 11:36:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:45.013 11:36:17 -- setup/devices.sh@63 -- # found=1 00:08:45.013 11:36:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.013 11:36:17 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.013 11:36:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.583 11:36:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.583 11:36:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.583 11:36:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.583 11:36:18 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.583 11:36:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:45.583 11:36:18 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:45.583 11:36:18 -- setup/devices.sh@68 -- # return 0 00:08:45.583 11:36:18 -- setup/devices.sh@187 -- # cleanup_dm 00:08:45.583 11:36:18 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:45.583 11:36:18 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:45.583 11:36:18 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:45.583 11:36:18 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:45.583 11:36:18 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:45.583 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:45.583 11:36:18 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:45.583 11:36:18 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:45.583 00:08:45.583 real 0m4.964s 00:08:45.583 user 0m0.739s 00:08:45.583 sys 0m1.155s 00:08:45.583 11:36:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.583 11:36:18 -- common/autotest_common.sh@10 -- # set +x 00:08:45.583 ************************************ 00:08:45.583 END TEST dm_mount 00:08:45.583 ************************************ 00:08:45.842 11:36:18 -- setup/devices.sh@1 -- # cleanup 00:08:45.842 11:36:18 -- setup/devices.sh@11 -- # cleanup_nvme 00:08:45.842 11:36:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.842 11:36:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:45.842 11:36:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:45.842 11:36:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:45.842 11:36:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:46.102 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:46.102 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:46.102 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:46.102 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:46.102 11:36:18 -- setup/devices.sh@12 -- # cleanup_dm 00:08:46.102 11:36:18 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:46.102 11:36:18 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:46.102 11:36:18 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:46.102 11:36:18 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:46.102 11:36:18 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:46.102 11:36:18 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:46.102 00:08:46.102 real 0m11.965s 00:08:46.102 user 0m2.632s 00:08:46.102 sys 0m3.544s 00:08:46.102 11:36:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.102 11:36:18 -- common/autotest_common.sh@10 -- # set +x 00:08:46.102 ************************************ 00:08:46.102 END TEST devices 00:08:46.102 ************************************ 00:08:46.102 00:08:46.102 real 0m25.855s 00:08:46.102 user 0m8.415s 00:08:46.102 sys 0m12.120s 00:08:46.102 11:36:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.102 11:36:19 -- common/autotest_common.sh@10 -- # set +x 00:08:46.102 ************************************ 00:08:46.102 END TEST setup.sh 00:08:46.102 ************************************ 00:08:46.102 11:36:19 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:46.362 Hugepages 00:08:46.362 node hugesize free / total 00:08:46.362 node0 1048576kB 0 / 0 00:08:46.362 node0 2048kB 2048 / 2048 00:08:46.362 00:08:46.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:46.362 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:46.622 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:46.622 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:46.622 11:36:19 -- spdk/autotest.sh@128 -- # uname -s 00:08:46.622 11:36:19 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:08:46.622 11:36:19 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:08:46.622 11:36:19 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:47.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.571 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.571 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.843 11:36:20 -- common/autotest_common.sh@1527 -- # sleep 1 00:08:48.782 11:36:21 -- common/autotest_common.sh@1528 -- # bdfs=() 00:08:48.782 11:36:21 -- common/autotest_common.sh@1528 -- # local bdfs 00:08:48.782 11:36:21 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:08:48.782 11:36:21 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:08:48.782 11:36:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:48.782 11:36:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:48.782 11:36:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:48.782 11:36:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.782 11:36:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:48.782 11:36:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:08:48.782 11:36:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:48.782 11:36:21 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:49.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.351 Waiting for block devices as requested 00:08:49.351 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.351 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.611 11:36:22 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:08:49.611 11:36:22 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:49.611 11:36:22 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:08:49.611 11:36:22 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:08:49.611 11:36:22 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:08:49.611 11:36:22 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1540 -- # grep oacs 00:08:49.611 11:36:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:49.611 11:36:22 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:08:49.611 11:36:22 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:08:49.611 11:36:22 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:08:49.611 11:36:22 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:08:49.611 11:36:22 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:08:49.611 11:36:22 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:08:49.611 11:36:22 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:08:49.611 11:36:22 -- common/autotest_common.sh@1552 -- # continue 00:08:49.611 11:36:22 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:08:49.611 11:36:22 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:08:49.611 11:36:22 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:49.611 11:36:22 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:08:49.611 11:36:22 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:08:49.611 11:36:22 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:08:49.612 11:36:22 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:08:49.612 11:36:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:08:49.612 11:36:22 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:08:49.612 11:36:22 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:08:49.612 11:36:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:49.612 11:36:22 -- common/autotest_common.sh@1540 -- # grep oacs 00:08:49.612 11:36:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:49.612 11:36:22 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:08:49.612 11:36:22 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:08:49.612 11:36:22 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:08:49.612 11:36:22 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:08:49.612 11:36:22 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:08:49.612 11:36:22 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:08:49.612 11:36:22 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:08:49.612 11:36:22 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:08:49.612 11:36:22 -- common/autotest_common.sh@1552 -- # continue 00:08:49.612 11:36:22 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:08:49.612 11:36:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.612 11:36:22 -- common/autotest_common.sh@10 -- # set +x 00:08:49.612 11:36:22 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:08:49.612 11:36:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.612 11:36:22 -- common/autotest_common.sh@10 -- # set +x 00:08:49.612 11:36:22 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:50.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.550 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.550 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:08:50.550 11:36:23 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:08:50.550 11:36:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.550 11:36:23 -- common/autotest_common.sh@10 -- # set +x 00:08:50.810 11:36:23 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:08:50.810 11:36:23 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:08:50.810 11:36:23 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:08:50.810 11:36:23 -- common/autotest_common.sh@1572 -- # bdfs=() 00:08:50.810 11:36:23 -- common/autotest_common.sh@1572 -- # local bdfs 00:08:50.810 11:36:23 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:08:50.810 11:36:23 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:50.810 11:36:23 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:50.810 11:36:23 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.810 11:36:23 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.810 11:36:23 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:50.810 11:36:23 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:08:50.810 11:36:23 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:50.810 11:36:23 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:08:50.810 11:36:23 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:08:50.810 11:36:23 -- common/autotest_common.sh@1575 -- # device=0x0010 00:08:50.810 11:36:23 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:50.810 11:36:23 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:08:50.810 11:36:23 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:08:50.810 11:36:23 -- common/autotest_common.sh@1575 -- # device=0x0010 00:08:50.810 11:36:23 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:50.810 11:36:23 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:08:50.810 11:36:23 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:08:50.810 11:36:23 -- common/autotest_common.sh@1588 -- # return 0 00:08:50.810 11:36:23 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:08:50.810 11:36:23 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:08:50.810 11:36:23 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:50.810 11:36:23 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:50.810 11:36:23 -- spdk/autotest.sh@160 -- # timing_enter lib 00:08:50.810 11:36:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.810 11:36:23 -- common/autotest_common.sh@10 -- # set +x 00:08:50.810 11:36:23 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:50.810 11:36:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.810 11:36:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.810 11:36:23 -- common/autotest_common.sh@10 -- # set +x 00:08:50.810 ************************************ 00:08:50.810 START TEST env 00:08:50.810 ************************************ 00:08:50.810 11:36:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:51.069 * Looking for test storage... 
00:08:51.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:51.069 11:36:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:51.069 11:36:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:51.069 11:36:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:51.069 11:36:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:51.069 11:36:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:51.069 11:36:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:51.069 11:36:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:51.069 11:36:23 -- scripts/common.sh@335 -- # IFS=.-: 00:08:51.069 11:36:23 -- scripts/common.sh@335 -- # read -ra ver1 00:08:51.069 11:36:23 -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.069 11:36:23 -- scripts/common.sh@336 -- # read -ra ver2 00:08:51.069 11:36:23 -- scripts/common.sh@337 -- # local 'op=<' 00:08:51.069 11:36:23 -- scripts/common.sh@339 -- # ver1_l=2 00:08:51.069 11:36:23 -- scripts/common.sh@340 -- # ver2_l=1 00:08:51.069 11:36:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:51.069 11:36:23 -- scripts/common.sh@343 -- # case "$op" in 00:08:51.069 11:36:23 -- scripts/common.sh@344 -- # : 1 00:08:51.069 11:36:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:51.069 11:36:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.069 11:36:23 -- scripts/common.sh@364 -- # decimal 1 00:08:51.069 11:36:23 -- scripts/common.sh@352 -- # local d=1 00:08:51.069 11:36:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.069 11:36:23 -- scripts/common.sh@354 -- # echo 1 00:08:51.069 11:36:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:51.069 11:36:23 -- scripts/common.sh@365 -- # decimal 2 00:08:51.069 11:36:23 -- scripts/common.sh@352 -- # local d=2 00:08:51.069 11:36:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.069 11:36:23 -- scripts/common.sh@354 -- # echo 2 00:08:51.070 11:36:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:51.070 11:36:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:51.070 11:36:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:51.070 11:36:23 -- scripts/common.sh@367 -- # return 0 00:08:51.070 11:36:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.070 11:36:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.070 --rc genhtml_branch_coverage=1 00:08:51.070 --rc genhtml_function_coverage=1 00:08:51.070 --rc genhtml_legend=1 00:08:51.070 --rc geninfo_all_blocks=1 00:08:51.070 --rc geninfo_unexecuted_blocks=1 00:08:51.070 00:08:51.070 ' 00:08:51.070 11:36:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.070 --rc genhtml_branch_coverage=1 00:08:51.070 --rc genhtml_function_coverage=1 00:08:51.070 --rc genhtml_legend=1 00:08:51.070 --rc geninfo_all_blocks=1 00:08:51.070 --rc geninfo_unexecuted_blocks=1 00:08:51.070 00:08:51.070 ' 00:08:51.070 11:36:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.070 --rc genhtml_branch_coverage=1 00:08:51.070 --rc genhtml_function_coverage=1 00:08:51.070 --rc genhtml_legend=1 00:08:51.070 --rc geninfo_all_blocks=1 00:08:51.070 --rc geninfo_unexecuted_blocks=1 00:08:51.070 00:08:51.070 ' 00:08:51.070 11:36:23 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.070 --rc genhtml_branch_coverage=1 00:08:51.070 --rc genhtml_function_coverage=1 00:08:51.070 --rc genhtml_legend=1 00:08:51.070 --rc geninfo_all_blocks=1 00:08:51.070 --rc geninfo_unexecuted_blocks=1 00:08:51.070 00:08:51.070 ' 00:08:51.070 11:36:23 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:51.070 11:36:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.070 11:36:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.070 11:36:23 -- common/autotest_common.sh@10 -- # set +x 00:08:51.070 ************************************ 00:08:51.070 START TEST env_memory 00:08:51.070 ************************************ 00:08:51.070 11:36:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:51.070 00:08:51.070 00:08:51.070 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.070 http://cunit.sourceforge.net/ 00:08:51.070 00:08:51.070 00:08:51.070 Suite: memory 00:08:51.070 Test: alloc and free memory map ...[2024-11-20 11:36:24.027191] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:51.070 passed 00:08:51.070 Test: mem map translation ...[2024-11-20 11:36:24.048513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:51.070 [2024-11-20 11:36:24.048586] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:51.070 [2024-11-20 11:36:24.048631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:51.070 [2024-11-20 11:36:24.048639] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:51.070 passed 00:08:51.070 Test: mem map registration ...[2024-11-20 11:36:24.089605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:51.070 [2024-11-20 11:36:24.089662] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:51.070 passed 00:08:51.330 Test: mem map adjacent registrations ...passed 00:08:51.330 00:08:51.330 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.330 suites 1 1 n/a 0 0 00:08:51.330 tests 4 4 4 0 0 00:08:51.330 asserts 152 152 152 0 n/a 00:08:51.330 00:08:51.330 Elapsed time = 0.146 seconds 00:08:51.330 00:08:51.330 real 0m0.172s 00:08:51.330 user 0m0.148s 00:08:51.330 sys 0m0.020s 00:08:51.330 11:36:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.330 11:36:24 -- common/autotest_common.sh@10 -- # set +x 00:08:51.330 ************************************ 00:08:51.330 END TEST env_memory 00:08:51.330 ************************************ 00:08:51.330 11:36:24 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:51.330 11:36:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.330 11:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.330 11:36:24 -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.330 ************************************ 00:08:51.330 START TEST env_vtophys 00:08:51.330 ************************************ 00:08:51.330 11:36:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:51.330 EAL: lib.eal log level changed from notice to debug 00:08:51.330 EAL: Detected lcore 0 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 1 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 2 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 3 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 4 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 5 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 6 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 7 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 8 as core 0 on socket 0 00:08:51.330 EAL: Detected lcore 9 as core 0 on socket 0 00:08:51.330 EAL: Maximum logical cores by configuration: 128 00:08:51.330 EAL: Detected CPU lcores: 10 00:08:51.330 EAL: Detected NUMA nodes: 1 00:08:51.330 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:51.330 EAL: Detected shared linkage of DPDK 00:08:51.330 EAL: No shared files mode enabled, IPC will be disabled 00:08:51.330 EAL: Selected IOVA mode 'PA' 00:08:51.330 EAL: Probing VFIO support... 00:08:51.330 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:51.330 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:51.330 EAL: Ask a virtual area of 0x2e000 bytes 00:08:51.330 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:51.330 EAL: Setting up physically contiguous memory... 00:08:51.330 EAL: Setting maximum number of open files to 524288 00:08:51.330 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:51.330 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:51.330 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.330 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:51.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.330 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.330 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:51.330 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:51.330 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.330 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:51.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.330 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.330 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:51.330 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:51.330 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.330 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:51.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.330 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.330 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:51.330 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:51.330 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.330 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:51.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.330 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.330 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:51.330 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
00:08:51.330 EAL: Hugepages will be freed exactly as allocated. 00:08:51.330 EAL: No shared files mode enabled, IPC is disabled 00:08:51.330 EAL: No shared files mode enabled, IPC is disabled 00:08:51.330 EAL: TSC frequency is ~2290000 KHz 00:08:51.330 EAL: Main lcore 0 is ready (tid=7fe4143fba00;cpuset=[0]) 00:08:51.330 EAL: Trying to obtain current memory policy. 00:08:51.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.330 EAL: Restoring previous memory policy: 0 00:08:51.330 EAL: request: mp_malloc_sync 00:08:51.330 EAL: No shared files mode enabled, IPC is disabled 00:08:51.330 EAL: Heap on socket 0 was expanded by 2MB 00:08:51.330 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:51.330 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:51.330 EAL: Mem event callback 'spdk:(nil)' registered 00:08:51.330 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:51.589 00:08:51.589 00:08:51.589 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.589 http://cunit.sourceforge.net/ 00:08:51.589 00:08:51.589 00:08:51.589 Suite: components_suite 00:08:51.589 Test: vtophys_malloc_test ...passed 00:08:51.589 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 4MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was shrunk by 4MB 00:08:51.589 EAL: Trying to obtain current memory policy. 00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 6MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was shrunk by 6MB 00:08:51.589 EAL: Trying to obtain current memory policy. 00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 10MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was shrunk by 10MB 00:08:51.589 EAL: Trying to obtain current memory policy. 
00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 18MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was shrunk by 18MB 00:08:51.589 EAL: Trying to obtain current memory policy. 00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 34MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was shrunk by 34MB 00:08:51.589 EAL: Trying to obtain current memory policy. 00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 66MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was shrunk by 66MB 00:08:51.589 EAL: Trying to obtain current memory policy. 00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 130MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was shrunk by 130MB 00:08:51.589 EAL: Trying to obtain current memory policy. 00:08:51.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.589 EAL: Restoring previous memory policy: 4 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.589 EAL: request: mp_malloc_sync 00:08:51.589 EAL: No shared files mode enabled, IPC is disabled 00:08:51.589 EAL: Heap on socket 0 was expanded by 258MB 00:08:51.589 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.848 EAL: request: mp_malloc_sync 00:08:51.848 EAL: No shared files mode enabled, IPC is disabled 00:08:51.848 EAL: Heap on socket 0 was shrunk by 258MB 00:08:51.848 EAL: Trying to obtain current memory policy. 
00:08:51.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.848 EAL: Restoring previous memory policy: 4 00:08:51.848 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.848 EAL: request: mp_malloc_sync 00:08:51.848 EAL: No shared files mode enabled, IPC is disabled 00:08:51.848 EAL: Heap on socket 0 was expanded by 514MB 00:08:51.848 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.111 EAL: request: mp_malloc_sync 00:08:52.111 EAL: No shared files mode enabled, IPC is disabled 00:08:52.111 EAL: Heap on socket 0 was shrunk by 514MB 00:08:52.111 EAL: Trying to obtain current memory policy. 00:08:52.111 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.111 EAL: Restoring previous memory policy: 4 00:08:52.111 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.111 EAL: request: mp_malloc_sync 00:08:52.111 EAL: No shared files mode enabled, IPC is disabled 00:08:52.111 EAL: Heap on socket 0 was expanded by 1026MB 00:08:52.371 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.631 passed 00:08:52.631 00:08:52.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.631 suites 1 1 n/a 0 0 00:08:52.631 tests 2 2 2 0 0 00:08:52.631 asserts 5330 5330 5330 0 n/a 00:08:52.631 00:08:52.631 Elapsed time = 1.022 seconds 00:08:52.631 EAL: request: mp_malloc_sync 00:08:52.631 EAL: No shared files mode enabled, IPC is disabled 00:08:52.631 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:52.631 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.631 EAL: request: mp_malloc_sync 00:08:52.631 EAL: No shared files mode enabled, IPC is disabled 00:08:52.631 EAL: Heap on socket 0 was shrunk by 2MB 00:08:52.631 EAL: No shared files mode enabled, IPC is disabled 00:08:52.631 EAL: No shared files mode enabled, IPC is disabled 00:08:52.631 EAL: No shared files mode enabled, IPC is disabled 00:08:52.631 00:08:52.631 real 0m1.220s 00:08:52.631 user 0m0.647s 00:08:52.631 sys 0m0.446s 00:08:52.631 11:36:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.631 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:08:52.631 ************************************ 00:08:52.631 END TEST env_vtophys 00:08:52.631 ************************************ 00:08:52.631 11:36:25 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:52.631 11:36:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:52.631 11:36:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.631 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:08:52.631 ************************************ 00:08:52.631 START TEST env_pci 00:08:52.631 ************************************ 00:08:52.631 11:36:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:52.631 00:08:52.631 00:08:52.631 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.631 http://cunit.sourceforge.net/ 00:08:52.631 00:08:52.631 00:08:52.631 Suite: pci 00:08:52.631 Test: pci_hook ...[2024-11-20 11:36:25.518833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55665 has claimed it 00:08:52.631 passed 00:08:52.631 00:08:52.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.631 suites 1 1 n/a 0 0 00:08:52.631 tests 1 1 1 0 0 00:08:52.631 asserts 25 25 25 0 n/a 00:08:52.631 00:08:52.631 Elapsed time = 0.002 seconds 00:08:52.631 EAL: Cannot find device (10000:00:01.0) 00:08:52.631 EAL: Failed to attach device 
on primary process 00:08:52.631 00:08:52.631 real 0m0.028s 00:08:52.631 user 0m0.011s 00:08:52.631 sys 0m0.016s 00:08:52.631 11:36:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.631 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:08:52.631 ************************************ 00:08:52.631 END TEST env_pci 00:08:52.631 ************************************ 00:08:52.631 11:36:25 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:52.631 11:36:25 -- env/env.sh@15 -- # uname 00:08:52.631 11:36:25 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:52.631 11:36:25 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:52.631 11:36:25 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:52.631 11:36:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:52.631 11:36:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.631 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:08:52.631 ************************************ 00:08:52.631 START TEST env_dpdk_post_init 00:08:52.631 ************************************ 00:08:52.631 11:36:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:52.631 EAL: Detected CPU lcores: 10 00:08:52.631 EAL: Detected NUMA nodes: 1 00:08:52.631 EAL: Detected shared linkage of DPDK 00:08:52.631 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:52.631 EAL: Selected IOVA mode 'PA' 00:08:52.891 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:52.891 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:52.891 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:08:52.891 Starting DPDK initialization... 00:08:52.891 Starting SPDK post initialization... 00:08:52.891 SPDK NVMe probe 00:08:52.891 Attaching to 0000:00:06.0 00:08:52.891 Attaching to 0000:00:07.0 00:08:52.891 Attached to 0000:00:06.0 00:08:52.891 Attached to 0000:00:07.0 00:08:52.891 Cleaning up... 
00:08:52.891 00:08:52.891 real 0m0.186s 00:08:52.891 user 0m0.048s 00:08:52.891 sys 0m0.039s 00:08:52.891 11:36:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.891 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:08:52.891 ************************************ 00:08:52.891 END TEST env_dpdk_post_init 00:08:52.891 ************************************ 00:08:52.891 11:36:25 -- env/env.sh@26 -- # uname 00:08:52.891 11:36:25 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:52.891 11:36:25 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:52.891 11:36:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:52.891 11:36:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.891 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:08:52.892 ************************************ 00:08:52.892 START TEST env_mem_callbacks 00:08:52.892 ************************************ 00:08:52.892 11:36:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:52.892 EAL: Detected CPU lcores: 10 00:08:52.892 EAL: Detected NUMA nodes: 1 00:08:52.892 EAL: Detected shared linkage of DPDK 00:08:52.892 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:52.892 EAL: Selected IOVA mode 'PA' 00:08:53.151 00:08:53.151 00:08:53.152 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.152 http://cunit.sourceforge.net/ 00:08:53.152 00:08:53.152 00:08:53.152 Suite: memory 00:08:53.152 Test: test ... 00:08:53.152 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:53.152 register 0x200000200000 2097152 00:08:53.152 malloc 3145728 00:08:53.152 register 0x200000400000 4194304 00:08:53.152 buf 0x200000500000 len 3145728 PASSED 00:08:53.152 malloc 64 00:08:53.152 buf 0x2000004fff40 len 64 PASSED 00:08:53.152 malloc 4194304 00:08:53.152 register 0x200000800000 6291456 00:08:53.152 buf 0x200000a00000 len 4194304 PASSED 00:08:53.152 free 0x200000500000 3145728 00:08:53.152 free 0x2000004fff40 64 00:08:53.152 unregister 0x200000400000 4194304 PASSED 00:08:53.152 free 0x200000a00000 4194304 00:08:53.152 unregister 0x200000800000 6291456 PASSED 00:08:53.152 malloc 8388608 00:08:53.152 register 0x200000400000 10485760 00:08:53.152 buf 0x200000600000 len 8388608 PASSED 00:08:53.152 free 0x200000600000 8388608 00:08:53.152 unregister 0x200000400000 10485760 PASSED 00:08:53.152 passed 00:08:53.152 00:08:53.152 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.152 suites 1 1 n/a 0 0 00:08:53.152 tests 1 1 1 0 0 00:08:53.152 asserts 15 15 15 0 n/a 00:08:53.152 00:08:53.152 Elapsed time = 0.010 seconds 00:08:53.152 00:08:53.152 real 0m0.152s 00:08:53.152 user 0m0.021s 00:08:53.152 sys 0m0.028s 00:08:53.152 11:36:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.152 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 ************************************ 00:08:53.152 END TEST env_mem_callbacks 00:08:53.152 ************************************ 00:08:53.152 00:08:53.152 real 0m2.302s 00:08:53.152 user 0m1.107s 00:08:53.152 sys 0m0.883s 00:08:53.152 11:36:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.152 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 ************************************ 00:08:53.152 END TEST env 00:08:53.152 ************************************ 00:08:53.152 11:36:26 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:08:53.152 11:36:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:53.152 11:36:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.152 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 ************************************ 00:08:53.152 START TEST rpc 00:08:53.152 ************************************ 00:08:53.152 11:36:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:53.412 * Looking for test storage... 00:08:53.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:53.412 11:36:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:53.412 11:36:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:53.412 11:36:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:53.412 11:36:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:53.412 11:36:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:53.412 11:36:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:53.412 11:36:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:53.412 11:36:26 -- scripts/common.sh@335 -- # IFS=.-: 00:08:53.412 11:36:26 -- scripts/common.sh@335 -- # read -ra ver1 00:08:53.412 11:36:26 -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.412 11:36:26 -- scripts/common.sh@336 -- # read -ra ver2 00:08:53.412 11:36:26 -- scripts/common.sh@337 -- # local 'op=<' 00:08:53.412 11:36:26 -- scripts/common.sh@339 -- # ver1_l=2 00:08:53.412 11:36:26 -- scripts/common.sh@340 -- # ver2_l=1 00:08:53.412 11:36:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:53.412 11:36:26 -- scripts/common.sh@343 -- # case "$op" in 00:08:53.412 11:36:26 -- scripts/common.sh@344 -- # : 1 00:08:53.412 11:36:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:53.412 11:36:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.412 11:36:26 -- scripts/common.sh@364 -- # decimal 1 00:08:53.412 11:36:26 -- scripts/common.sh@352 -- # local d=1 00:08:53.412 11:36:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.413 11:36:26 -- scripts/common.sh@354 -- # echo 1 00:08:53.413 11:36:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:53.413 11:36:26 -- scripts/common.sh@365 -- # decimal 2 00:08:53.413 11:36:26 -- scripts/common.sh@352 -- # local d=2 00:08:53.413 11:36:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.413 11:36:26 -- scripts/common.sh@354 -- # echo 2 00:08:53.413 11:36:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:53.413 11:36:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:53.413 11:36:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:53.413 11:36:26 -- scripts/common.sh@367 -- # return 0 00:08:53.413 11:36:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.413 11:36:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.413 --rc genhtml_branch_coverage=1 00:08:53.413 --rc genhtml_function_coverage=1 00:08:53.413 --rc genhtml_legend=1 00:08:53.413 --rc geninfo_all_blocks=1 00:08:53.413 --rc geninfo_unexecuted_blocks=1 00:08:53.413 00:08:53.413 ' 00:08:53.413 11:36:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.413 --rc genhtml_branch_coverage=1 00:08:53.413 --rc genhtml_function_coverage=1 00:08:53.413 --rc genhtml_legend=1 00:08:53.413 --rc geninfo_all_blocks=1 00:08:53.413 --rc geninfo_unexecuted_blocks=1 00:08:53.413 00:08:53.413 ' 00:08:53.413 11:36:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.413 --rc genhtml_branch_coverage=1 00:08:53.413 --rc genhtml_function_coverage=1 00:08:53.413 --rc genhtml_legend=1 00:08:53.413 --rc geninfo_all_blocks=1 00:08:53.413 --rc geninfo_unexecuted_blocks=1 00:08:53.413 00:08:53.413 ' 00:08:53.413 11:36:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.413 --rc genhtml_branch_coverage=1 00:08:53.413 --rc genhtml_function_coverage=1 00:08:53.413 --rc genhtml_legend=1 00:08:53.413 --rc geninfo_all_blocks=1 00:08:53.413 --rc geninfo_unexecuted_blocks=1 00:08:53.413 00:08:53.413 ' 00:08:53.413 11:36:26 -- rpc/rpc.sh@65 -- # spdk_pid=55787 00:08:53.413 11:36:26 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:53.413 11:36:26 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:53.413 11:36:26 -- rpc/rpc.sh@67 -- # waitforlisten 55787 00:08:53.413 11:36:26 -- common/autotest_common.sh@829 -- # '[' -z 55787 ']' 00:08:53.413 11:36:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.413 11:36:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.413 11:36:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:53.413 11:36:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.413 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:08:53.413 [2024-11-20 11:36:26.395147] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.413 [2024-11-20 11:36:26.395588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55787 ] 00:08:53.672 [2024-11-20 11:36:26.534577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.672 [2024-11-20 11:36:26.631520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.672 [2024-11-20 11:36:26.631648] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:53.672 [2024-11-20 11:36:26.631664] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55787' to capture a snapshot of events at runtime. 00:08:53.672 [2024-11-20 11:36:26.631670] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55787 for offline analysis/debug. 00:08:53.672 [2024-11-20 11:36:26.631695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.243 11:36:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.243 11:36:27 -- common/autotest_common.sh@862 -- # return 0 00:08:54.243 11:36:27 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:54.243 11:36:27 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:54.243 11:36:27 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:54.243 11:36:27 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:54.243 11:36:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.243 11:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.243 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.243 ************************************ 00:08:54.243 START TEST rpc_integrity 00:08:54.243 ************************************ 00:08:54.243 11:36:27 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:08:54.243 11:36:27 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:54.243 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.243 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.503 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.503 11:36:27 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:54.503 11:36:27 -- rpc/rpc.sh@13 -- # jq length 00:08:54.503 11:36:27 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:54.503 11:36:27 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:54.503 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.503 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.503 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.503 11:36:27 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:54.503 11:36:27 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:54.503 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.503 11:36:27 -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.503 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.503 11:36:27 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:54.503 { 00:08:54.503 "aliases": [ 00:08:54.503 "be0828af-2727-4544-92c2-8f82a330b716" 00:08:54.503 ], 00:08:54.503 "assigned_rate_limits": { 00:08:54.503 "r_mbytes_per_sec": 0, 00:08:54.503 "rw_ios_per_sec": 0, 00:08:54.503 "rw_mbytes_per_sec": 0, 00:08:54.503 "w_mbytes_per_sec": 0 00:08:54.503 }, 00:08:54.503 "block_size": 512, 00:08:54.503 "claimed": false, 00:08:54.503 "driver_specific": {}, 00:08:54.503 "memory_domains": [ 00:08:54.503 { 00:08:54.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.503 "dma_device_type": 2 00:08:54.503 } 00:08:54.503 ], 00:08:54.503 "name": "Malloc0", 00:08:54.503 "num_blocks": 16384, 00:08:54.503 "product_name": "Malloc disk", 00:08:54.503 "supported_io_types": { 00:08:54.503 "abort": true, 00:08:54.503 "compare": false, 00:08:54.503 "compare_and_write": false, 00:08:54.503 "flush": true, 00:08:54.503 "nvme_admin": false, 00:08:54.503 "nvme_io": false, 00:08:54.503 "read": true, 00:08:54.503 "reset": true, 00:08:54.503 "unmap": true, 00:08:54.503 "write": true, 00:08:54.503 "write_zeroes": true 00:08:54.503 }, 00:08:54.503 "uuid": "be0828af-2727-4544-92c2-8f82a330b716", 00:08:54.503 "zoned": false 00:08:54.503 } 00:08:54.503 ]' 00:08:54.503 11:36:27 -- rpc/rpc.sh@17 -- # jq length 00:08:54.503 11:36:27 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:54.503 11:36:27 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:54.503 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.503 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.503 [2024-11-20 11:36:27.431573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:54.503 [2024-11-20 11:36:27.431623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.503 [2024-11-20 11:36:27.431637] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2091880 00:08:54.503 [2024-11-20 11:36:27.431643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.503 [2024-11-20 11:36:27.433147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.503 [2024-11-20 11:36:27.433180] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:54.503 Passthru0 00:08:54.503 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.503 11:36:27 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:54.503 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.503 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.503 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.503 11:36:27 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:54.503 { 00:08:54.503 "aliases": [ 00:08:54.503 "be0828af-2727-4544-92c2-8f82a330b716" 00:08:54.503 ], 00:08:54.503 "assigned_rate_limits": { 00:08:54.503 "r_mbytes_per_sec": 0, 00:08:54.503 "rw_ios_per_sec": 0, 00:08:54.503 "rw_mbytes_per_sec": 0, 00:08:54.503 "w_mbytes_per_sec": 0 00:08:54.503 }, 00:08:54.503 "block_size": 512, 00:08:54.503 "claim_type": "exclusive_write", 00:08:54.503 "claimed": true, 00:08:54.503 "driver_specific": {}, 00:08:54.503 "memory_domains": [ 00:08:54.503 { 00:08:54.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.503 "dma_device_type": 2 00:08:54.503 } 00:08:54.503 ], 00:08:54.503 "name": "Malloc0", 00:08:54.503 "num_blocks": 16384, 
00:08:54.503 "product_name": "Malloc disk", 00:08:54.503 "supported_io_types": { 00:08:54.503 "abort": true, 00:08:54.503 "compare": false, 00:08:54.503 "compare_and_write": false, 00:08:54.503 "flush": true, 00:08:54.503 "nvme_admin": false, 00:08:54.503 "nvme_io": false, 00:08:54.503 "read": true, 00:08:54.503 "reset": true, 00:08:54.503 "unmap": true, 00:08:54.503 "write": true, 00:08:54.503 "write_zeroes": true 00:08:54.503 }, 00:08:54.503 "uuid": "be0828af-2727-4544-92c2-8f82a330b716", 00:08:54.503 "zoned": false 00:08:54.503 }, 00:08:54.503 { 00:08:54.503 "aliases": [ 00:08:54.503 "21a6397d-3db5-5c0b-a69b-4d300dce9ec6" 00:08:54.503 ], 00:08:54.503 "assigned_rate_limits": { 00:08:54.503 "r_mbytes_per_sec": 0, 00:08:54.503 "rw_ios_per_sec": 0, 00:08:54.503 "rw_mbytes_per_sec": 0, 00:08:54.503 "w_mbytes_per_sec": 0 00:08:54.503 }, 00:08:54.503 "block_size": 512, 00:08:54.503 "claimed": false, 00:08:54.503 "driver_specific": { 00:08:54.503 "passthru": { 00:08:54.503 "base_bdev_name": "Malloc0", 00:08:54.503 "name": "Passthru0" 00:08:54.503 } 00:08:54.503 }, 00:08:54.503 "memory_domains": [ 00:08:54.503 { 00:08:54.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.504 "dma_device_type": 2 00:08:54.504 } 00:08:54.504 ], 00:08:54.504 "name": "Passthru0", 00:08:54.504 "num_blocks": 16384, 00:08:54.504 "product_name": "passthru", 00:08:54.504 "supported_io_types": { 00:08:54.504 "abort": true, 00:08:54.504 "compare": false, 00:08:54.504 "compare_and_write": false, 00:08:54.504 "flush": true, 00:08:54.504 "nvme_admin": false, 00:08:54.504 "nvme_io": false, 00:08:54.504 "read": true, 00:08:54.504 "reset": true, 00:08:54.504 "unmap": true, 00:08:54.504 "write": true, 00:08:54.504 "write_zeroes": true 00:08:54.504 }, 00:08:54.504 "uuid": "21a6397d-3db5-5c0b-a69b-4d300dce9ec6", 00:08:54.504 "zoned": false 00:08:54.504 } 00:08:54.504 ]' 00:08:54.504 11:36:27 -- rpc/rpc.sh@21 -- # jq length 00:08:54.504 11:36:27 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:54.504 11:36:27 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:54.504 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.504 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.504 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.504 11:36:27 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:54.504 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.504 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.504 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.504 11:36:27 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:54.504 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.504 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.763 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.763 11:36:27 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:54.763 11:36:27 -- rpc/rpc.sh@26 -- # jq length 00:08:54.763 11:36:27 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:54.763 00:08:54.763 real 0m0.317s 00:08:54.763 user 0m0.190s 00:08:54.763 sys 0m0.047s 00:08:54.763 11:36:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.763 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.763 ************************************ 00:08:54.763 END TEST rpc_integrity 00:08:54.763 ************************************ 00:08:54.763 11:36:27 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:54.763 11:36:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.763 
11:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.763 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.763 ************************************ 00:08:54.763 START TEST rpc_plugins 00:08:54.763 ************************************ 00:08:54.763 11:36:27 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:08:54.763 11:36:27 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:54.763 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.763 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.763 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.763 11:36:27 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:54.763 11:36:27 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:54.763 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.763 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.763 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.763 11:36:27 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:54.763 { 00:08:54.763 "aliases": [ 00:08:54.763 "8f9b994c-48c6-4d82-a80f-1c5b7caa300e" 00:08:54.763 ], 00:08:54.763 "assigned_rate_limits": { 00:08:54.763 "r_mbytes_per_sec": 0, 00:08:54.763 "rw_ios_per_sec": 0, 00:08:54.763 "rw_mbytes_per_sec": 0, 00:08:54.763 "w_mbytes_per_sec": 0 00:08:54.763 }, 00:08:54.763 "block_size": 4096, 00:08:54.763 "claimed": false, 00:08:54.763 "driver_specific": {}, 00:08:54.763 "memory_domains": [ 00:08:54.763 { 00:08:54.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.763 "dma_device_type": 2 00:08:54.763 } 00:08:54.763 ], 00:08:54.763 "name": "Malloc1", 00:08:54.763 "num_blocks": 256, 00:08:54.763 "product_name": "Malloc disk", 00:08:54.763 "supported_io_types": { 00:08:54.763 "abort": true, 00:08:54.763 "compare": false, 00:08:54.763 "compare_and_write": false, 00:08:54.763 "flush": true, 00:08:54.763 "nvme_admin": false, 00:08:54.763 "nvme_io": false, 00:08:54.763 "read": true, 00:08:54.763 "reset": true, 00:08:54.763 "unmap": true, 00:08:54.763 "write": true, 00:08:54.763 "write_zeroes": true 00:08:54.763 }, 00:08:54.763 "uuid": "8f9b994c-48c6-4d82-a80f-1c5b7caa300e", 00:08:54.763 "zoned": false 00:08:54.763 } 00:08:54.763 ]' 00:08:54.763 11:36:27 -- rpc/rpc.sh@32 -- # jq length 00:08:54.763 11:36:27 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:54.764 11:36:27 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:54.764 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.764 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.764 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.764 11:36:27 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:54.764 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.764 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.764 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.764 11:36:27 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:54.764 11:36:27 -- rpc/rpc.sh@36 -- # jq length 00:08:55.022 11:36:27 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:55.023 00:08:55.023 real 0m0.152s 00:08:55.023 user 0m0.085s 00:08:55.023 sys 0m0.024s 00:08:55.023 11:36:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.023 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:55.023 ************************************ 00:08:55.023 END TEST rpc_plugins 00:08:55.023 ************************************ 00:08:55.023 11:36:27 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
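The rpc_integrity and rpc_plugins passes above exercise the bdev RPC surface directly. rpc_integrity creates an 8 MiB malloc bdev (16384 blocks of 512 bytes, matching the JSON dump), stacks a passthru bdev on it, re-reads the bdev list with jq, and tears both down; rpc_plugins drives the same create/delete through rpc.py's --plugin hook, with PYTHONPATH extended to include test/rpc_plugins as shown in the exported path. A rough manual equivalent — a sketch only, assuming a target on the default RPC socket and the repository layout from this run:

  # rpc_integrity, by hand
  scripts/rpc.py bdev_malloc_create 8 512                     # returns Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                   # expect 2 bdevs
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0

  # rpc_plugins, by hand: create_malloc/delete_malloc come from the rpc_plugin
  # module found on PYTHONPATH, not from the target itself
  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
  scripts/rpc.py --plugin rpc_plugin create_malloc            # returns Malloc1
  scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1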
00:08:55.023 11:36:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.023 11:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.023 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:55.023 ************************************ 00:08:55.023 START TEST rpc_trace_cmd_test 00:08:55.023 ************************************ 00:08:55.023 11:36:27 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:08:55.023 11:36:27 -- rpc/rpc.sh@40 -- # local info 00:08:55.023 11:36:27 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:55.023 11:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.023 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:08:55.023 11:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.023 11:36:27 -- rpc/rpc.sh@42 -- # info='{ 00:08:55.023 "bdev": { 00:08:55.023 "mask": "0x8", 00:08:55.023 "tpoint_mask": "0xffffffffffffffff" 00:08:55.023 }, 00:08:55.023 "bdev_nvme": { 00:08:55.023 "mask": "0x4000", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "blobfs": { 00:08:55.023 "mask": "0x80", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "dsa": { 00:08:55.023 "mask": "0x200", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "ftl": { 00:08:55.023 "mask": "0x40", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "iaa": { 00:08:55.023 "mask": "0x1000", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "iscsi_conn": { 00:08:55.023 "mask": "0x2", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "nvme_pcie": { 00:08:55.023 "mask": "0x800", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "nvme_tcp": { 00:08:55.023 "mask": "0x2000", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "nvmf_rdma": { 00:08:55.023 "mask": "0x10", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "nvmf_tcp": { 00:08:55.023 "mask": "0x20", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "scsi": { 00:08:55.023 "mask": "0x4", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "thread": { 00:08:55.023 "mask": "0x400", 00:08:55.023 "tpoint_mask": "0x0" 00:08:55.023 }, 00:08:55.023 "tpoint_group_mask": "0x8", 00:08:55.023 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55787" 00:08:55.023 }' 00:08:55.023 11:36:27 -- rpc/rpc.sh@43 -- # jq length 00:08:55.023 11:36:27 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:55.023 11:36:27 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:55.023 11:36:27 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:55.023 11:36:27 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:55.023 11:36:28 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:55.023 11:36:28 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:55.283 11:36:28 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:55.283 11:36:28 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:55.283 11:36:28 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:55.283 00:08:55.283 real 0m0.264s 00:08:55.283 user 0m0.212s 00:08:55.283 sys 0m0.041s 00:08:55.283 11:36:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.283 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.283 ************************************ 00:08:55.283 END TEST rpc_trace_cmd_test 00:08:55.283 ************************************ 00:08:55.283 11:36:28 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:08:55.283 11:36:28 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:08:55.283 11:36:28 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:08:55.283 11:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.283 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.283 ************************************ 00:08:55.283 START TEST go_rpc 00:08:55.283 ************************************ 00:08:55.283 11:36:28 -- common/autotest_common.sh@1114 -- # go_rpc 00:08:55.283 11:36:28 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:55.283 11:36:28 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:08:55.283 11:36:28 -- rpc/rpc.sh@52 -- # jq length 00:08:55.283 11:36:28 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:08:55.283 11:36:28 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:08:55.283 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.283 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.283 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.283 11:36:28 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:08:55.283 11:36:28 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:55.283 11:36:28 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["ca49f953-66bd-483b-af11-669f21843e96"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"ca49f953-66bd-483b-af11-669f21843e96","zoned":false}]' 00:08:55.283 11:36:28 -- rpc/rpc.sh@57 -- # jq length 00:08:55.541 11:36:28 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:08:55.541 11:36:28 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:55.541 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.541 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.541 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.541 11:36:28 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:55.541 11:36:28 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:08:55.541 11:36:28 -- rpc/rpc.sh@61 -- # jq length 00:08:55.541 11:36:28 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:08:55.541 00:08:55.541 real 0m0.218s 00:08:55.541 user 0m0.137s 00:08:55.541 sys 0m0.049s 00:08:55.541 11:36:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.542 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.542 ************************************ 00:08:55.542 END TEST go_rpc 00:08:55.542 ************************************ 00:08:55.542 11:36:28 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:55.542 11:36:28 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:55.542 11:36:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.542 11:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.542 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.542 ************************************ 00:08:55.542 START TEST rpc_daemon_integrity 00:08:55.542 ************************************ 00:08:55.542 11:36:28 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:08:55.542 11:36:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:55.542 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.542 11:36:28 -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.542 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.542 11:36:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:55.542 11:36:28 -- rpc/rpc.sh@13 -- # jq length 00:08:55.542 11:36:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:55.542 11:36:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:55.542 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.542 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.542 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.542 11:36:28 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:08:55.542 11:36:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:55.542 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.542 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.542 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.542 11:36:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:55.542 { 00:08:55.542 "aliases": [ 00:08:55.542 "85624b4b-e7ba-42dd-af97-caea7686a0b3" 00:08:55.542 ], 00:08:55.542 "assigned_rate_limits": { 00:08:55.542 "r_mbytes_per_sec": 0, 00:08:55.542 "rw_ios_per_sec": 0, 00:08:55.542 "rw_mbytes_per_sec": 0, 00:08:55.542 "w_mbytes_per_sec": 0 00:08:55.542 }, 00:08:55.542 "block_size": 512, 00:08:55.542 "claimed": false, 00:08:55.542 "driver_specific": {}, 00:08:55.542 "memory_domains": [ 00:08:55.542 { 00:08:55.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.542 "dma_device_type": 2 00:08:55.542 } 00:08:55.542 ], 00:08:55.542 "name": "Malloc3", 00:08:55.542 "num_blocks": 16384, 00:08:55.542 "product_name": "Malloc disk", 00:08:55.542 "supported_io_types": { 00:08:55.542 "abort": true, 00:08:55.542 "compare": false, 00:08:55.542 "compare_and_write": false, 00:08:55.542 "flush": true, 00:08:55.542 "nvme_admin": false, 00:08:55.542 "nvme_io": false, 00:08:55.542 "read": true, 00:08:55.542 "reset": true, 00:08:55.542 "unmap": true, 00:08:55.542 "write": true, 00:08:55.542 "write_zeroes": true 00:08:55.542 }, 00:08:55.542 "uuid": "85624b4b-e7ba-42dd-af97-caea7686a0b3", 00:08:55.542 "zoned": false 00:08:55.542 } 00:08:55.542 ]' 00:08:55.542 11:36:28 -- rpc/rpc.sh@17 -- # jq length 00:08:55.802 11:36:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:55.802 11:36:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:08:55.802 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.802 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.802 [2024-11-20 11:36:28.625822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:55.802 [2024-11-20 11:36:28.625888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.802 [2024-11-20 11:36:28.625912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2282680 00:08:55.802 [2024-11-20 11:36:28.625923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.802 [2024-11-20 11:36:28.627581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.802 [2024-11-20 11:36:28.627633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:55.802 Passthru0 00:08:55.802 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.802 11:36:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:55.802 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.802 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.802 
11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.802 11:36:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:55.802 { 00:08:55.802 "aliases": [ 00:08:55.802 "85624b4b-e7ba-42dd-af97-caea7686a0b3" 00:08:55.802 ], 00:08:55.802 "assigned_rate_limits": { 00:08:55.802 "r_mbytes_per_sec": 0, 00:08:55.802 "rw_ios_per_sec": 0, 00:08:55.802 "rw_mbytes_per_sec": 0, 00:08:55.802 "w_mbytes_per_sec": 0 00:08:55.802 }, 00:08:55.802 "block_size": 512, 00:08:55.802 "claim_type": "exclusive_write", 00:08:55.802 "claimed": true, 00:08:55.802 "driver_specific": {}, 00:08:55.802 "memory_domains": [ 00:08:55.802 { 00:08:55.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.802 "dma_device_type": 2 00:08:55.802 } 00:08:55.802 ], 00:08:55.802 "name": "Malloc3", 00:08:55.802 "num_blocks": 16384, 00:08:55.802 "product_name": "Malloc disk", 00:08:55.802 "supported_io_types": { 00:08:55.802 "abort": true, 00:08:55.802 "compare": false, 00:08:55.802 "compare_and_write": false, 00:08:55.802 "flush": true, 00:08:55.802 "nvme_admin": false, 00:08:55.802 "nvme_io": false, 00:08:55.802 "read": true, 00:08:55.802 "reset": true, 00:08:55.802 "unmap": true, 00:08:55.802 "write": true, 00:08:55.802 "write_zeroes": true 00:08:55.802 }, 00:08:55.802 "uuid": "85624b4b-e7ba-42dd-af97-caea7686a0b3", 00:08:55.802 "zoned": false 00:08:55.802 }, 00:08:55.802 { 00:08:55.802 "aliases": [ 00:08:55.802 "76ab5f61-9cf9-5343-b2f3-2fadb506cf7e" 00:08:55.802 ], 00:08:55.802 "assigned_rate_limits": { 00:08:55.802 "r_mbytes_per_sec": 0, 00:08:55.802 "rw_ios_per_sec": 0, 00:08:55.802 "rw_mbytes_per_sec": 0, 00:08:55.802 "w_mbytes_per_sec": 0 00:08:55.802 }, 00:08:55.802 "block_size": 512, 00:08:55.802 "claimed": false, 00:08:55.802 "driver_specific": { 00:08:55.802 "passthru": { 00:08:55.802 "base_bdev_name": "Malloc3", 00:08:55.802 "name": "Passthru0" 00:08:55.802 } 00:08:55.802 }, 00:08:55.802 "memory_domains": [ 00:08:55.802 { 00:08:55.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.802 "dma_device_type": 2 00:08:55.802 } 00:08:55.802 ], 00:08:55.802 "name": "Passthru0", 00:08:55.802 "num_blocks": 16384, 00:08:55.802 "product_name": "passthru", 00:08:55.802 "supported_io_types": { 00:08:55.802 "abort": true, 00:08:55.802 "compare": false, 00:08:55.802 "compare_and_write": false, 00:08:55.802 "flush": true, 00:08:55.802 "nvme_admin": false, 00:08:55.802 "nvme_io": false, 00:08:55.802 "read": true, 00:08:55.802 "reset": true, 00:08:55.802 "unmap": true, 00:08:55.802 "write": true, 00:08:55.802 "write_zeroes": true 00:08:55.802 }, 00:08:55.802 "uuid": "76ab5f61-9cf9-5343-b2f3-2fadb506cf7e", 00:08:55.802 "zoned": false 00:08:55.802 } 00:08:55.802 ]' 00:08:55.802 11:36:28 -- rpc/rpc.sh@21 -- # jq length 00:08:55.802 11:36:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:55.802 11:36:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:55.802 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.802 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.802 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.802 11:36:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:08:55.802 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.802 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.802 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.802 11:36:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:55.802 11:36:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.802 11:36:28 -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.802 11:36:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.802 11:36:28 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:55.802 11:36:28 -- rpc/rpc.sh@26 -- # jq length 00:08:55.802 ************************************ 00:08:55.802 END TEST rpc_daemon_integrity 00:08:55.802 ************************************ 00:08:55.802 11:36:28 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:55.802 00:08:55.802 real 0m0.308s 00:08:55.802 user 0m0.189s 00:08:55.802 sys 0m0.046s 00:08:55.802 11:36:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.802 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:55.802 11:36:28 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:55.802 11:36:28 -- rpc/rpc.sh@84 -- # killprocess 55787 00:08:55.802 11:36:28 -- common/autotest_common.sh@936 -- # '[' -z 55787 ']' 00:08:55.802 11:36:28 -- common/autotest_common.sh@940 -- # kill -0 55787 00:08:55.802 11:36:28 -- common/autotest_common.sh@941 -- # uname 00:08:55.802 11:36:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:55.802 11:36:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55787 00:08:56.061 killing process with pid 55787 00:08:56.061 11:36:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:56.061 11:36:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:56.061 11:36:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55787' 00:08:56.061 11:36:28 -- common/autotest_common.sh@955 -- # kill 55787 00:08:56.061 11:36:28 -- common/autotest_common.sh@960 -- # wait 55787 00:08:56.328 00:08:56.328 real 0m3.097s 00:08:56.328 user 0m3.932s 00:08:56.328 sys 0m0.828s 00:08:56.328 11:36:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.328 ************************************ 00:08:56.328 END TEST rpc 00:08:56.328 ************************************ 00:08:56.328 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:56.328 11:36:29 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:56.328 11:36:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.328 11:36:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.328 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:56.328 ************************************ 00:08:56.328 START TEST rpc_client 00:08:56.328 ************************************ 00:08:56.328 11:36:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:56.591 * Looking for test storage... 
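Earlier in the suite, rpc_trace_cmd_test only verifies what trace_get_info reports for the pid-55787 target: the tpoint_group_mask is 0x8 (the bdev group requested at startup), the bdev group's tpoint_mask is fully enabled, and the shared-memory path points at /dev/shm/spdk_tgt_trace.pid55787. While that target was still alive, a minimal by-hand version of the same check would have looked roughly like:

  scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # 0x8 -> bdev group enabled
  scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # 0xffffffffffffffff
  spdk_trace -s spdk_tgt -p 55787                              # snapshot events, per the startup notice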
00:08:56.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:56.591 11:36:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:56.591 11:36:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:56.591 11:36:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:56.591 11:36:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:56.591 11:36:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:56.591 11:36:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:56.591 11:36:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:56.591 11:36:29 -- scripts/common.sh@335 -- # IFS=.-: 00:08:56.591 11:36:29 -- scripts/common.sh@335 -- # read -ra ver1 00:08:56.591 11:36:29 -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.591 11:36:29 -- scripts/common.sh@336 -- # read -ra ver2 00:08:56.591 11:36:29 -- scripts/common.sh@337 -- # local 'op=<' 00:08:56.591 11:36:29 -- scripts/common.sh@339 -- # ver1_l=2 00:08:56.591 11:36:29 -- scripts/common.sh@340 -- # ver2_l=1 00:08:56.591 11:36:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:56.591 11:36:29 -- scripts/common.sh@343 -- # case "$op" in 00:08:56.591 11:36:29 -- scripts/common.sh@344 -- # : 1 00:08:56.591 11:36:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:56.591 11:36:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.591 11:36:29 -- scripts/common.sh@364 -- # decimal 1 00:08:56.591 11:36:29 -- scripts/common.sh@352 -- # local d=1 00:08:56.591 11:36:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.591 11:36:29 -- scripts/common.sh@354 -- # echo 1 00:08:56.591 11:36:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:56.591 11:36:29 -- scripts/common.sh@365 -- # decimal 2 00:08:56.591 11:36:29 -- scripts/common.sh@352 -- # local d=2 00:08:56.591 11:36:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.591 11:36:29 -- scripts/common.sh@354 -- # echo 2 00:08:56.591 11:36:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:56.591 11:36:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:56.591 11:36:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:56.591 11:36:29 -- scripts/common.sh@367 -- # return 0 00:08:56.591 11:36:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.591 11:36:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:56.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.591 --rc genhtml_branch_coverage=1 00:08:56.591 --rc genhtml_function_coverage=1 00:08:56.591 --rc genhtml_legend=1 00:08:56.591 --rc geninfo_all_blocks=1 00:08:56.591 --rc geninfo_unexecuted_blocks=1 00:08:56.591 00:08:56.591 ' 00:08:56.591 11:36:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:56.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.591 --rc genhtml_branch_coverage=1 00:08:56.591 --rc genhtml_function_coverage=1 00:08:56.591 --rc genhtml_legend=1 00:08:56.591 --rc geninfo_all_blocks=1 00:08:56.591 --rc geninfo_unexecuted_blocks=1 00:08:56.591 00:08:56.591 ' 00:08:56.591 11:36:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:56.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.591 --rc genhtml_branch_coverage=1 00:08:56.591 --rc genhtml_function_coverage=1 00:08:56.591 --rc genhtml_legend=1 00:08:56.591 --rc geninfo_all_blocks=1 00:08:56.591 --rc geninfo_unexecuted_blocks=1 00:08:56.591 00:08:56.591 ' 00:08:56.591 
11:36:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:56.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.591 --rc genhtml_branch_coverage=1 00:08:56.591 --rc genhtml_function_coverage=1 00:08:56.591 --rc genhtml_legend=1 00:08:56.591 --rc geninfo_all_blocks=1 00:08:56.591 --rc geninfo_unexecuted_blocks=1 00:08:56.591 00:08:56.591 ' 00:08:56.591 11:36:29 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:56.591 OK 00:08:56.591 11:36:29 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:56.591 00:08:56.591 real 0m0.237s 00:08:56.591 user 0m0.131s 00:08:56.591 sys 0m0.119s 00:08:56.591 11:36:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.591 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:56.591 ************************************ 00:08:56.591 END TEST rpc_client 00:08:56.591 ************************************ 00:08:56.591 11:36:29 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:56.591 11:36:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.591 11:36:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.591 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:56.591 ************************************ 00:08:56.591 START TEST json_config 00:08:56.591 ************************************ 00:08:56.591 11:36:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:56.852 11:36:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:56.852 11:36:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:56.852 11:36:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:56.852 11:36:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:56.852 11:36:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:56.852 11:36:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:56.852 11:36:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:56.852 11:36:29 -- scripts/common.sh@335 -- # IFS=.-: 00:08:56.852 11:36:29 -- scripts/common.sh@335 -- # read -ra ver1 00:08:56.852 11:36:29 -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.852 11:36:29 -- scripts/common.sh@336 -- # read -ra ver2 00:08:56.852 11:36:29 -- scripts/common.sh@337 -- # local 'op=<' 00:08:56.852 11:36:29 -- scripts/common.sh@339 -- # ver1_l=2 00:08:56.852 11:36:29 -- scripts/common.sh@340 -- # ver2_l=1 00:08:56.852 11:36:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:56.852 11:36:29 -- scripts/common.sh@343 -- # case "$op" in 00:08:56.852 11:36:29 -- scripts/common.sh@344 -- # : 1 00:08:56.852 11:36:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:56.852 11:36:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.852 11:36:29 -- scripts/common.sh@364 -- # decimal 1 00:08:56.852 11:36:29 -- scripts/common.sh@352 -- # local d=1 00:08:56.852 11:36:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.852 11:36:29 -- scripts/common.sh@354 -- # echo 1 00:08:56.852 11:36:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:56.852 11:36:29 -- scripts/common.sh@365 -- # decimal 2 00:08:56.852 11:36:29 -- scripts/common.sh@352 -- # local d=2 00:08:56.852 11:36:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.852 11:36:29 -- scripts/common.sh@354 -- # echo 2 00:08:56.852 11:36:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:56.852 11:36:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:56.852 11:36:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:56.852 11:36:29 -- scripts/common.sh@367 -- # return 0 00:08:56.852 11:36:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.852 11:36:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.852 --rc genhtml_branch_coverage=1 00:08:56.852 --rc genhtml_function_coverage=1 00:08:56.852 --rc genhtml_legend=1 00:08:56.852 --rc geninfo_all_blocks=1 00:08:56.852 --rc geninfo_unexecuted_blocks=1 00:08:56.852 00:08:56.852 ' 00:08:56.852 11:36:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.852 --rc genhtml_branch_coverage=1 00:08:56.852 --rc genhtml_function_coverage=1 00:08:56.852 --rc genhtml_legend=1 00:08:56.852 --rc geninfo_all_blocks=1 00:08:56.852 --rc geninfo_unexecuted_blocks=1 00:08:56.852 00:08:56.852 ' 00:08:56.852 11:36:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.852 --rc genhtml_branch_coverage=1 00:08:56.852 --rc genhtml_function_coverage=1 00:08:56.852 --rc genhtml_legend=1 00:08:56.852 --rc geninfo_all_blocks=1 00:08:56.852 --rc geninfo_unexecuted_blocks=1 00:08:56.852 00:08:56.852 ' 00:08:56.852 11:36:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.852 --rc genhtml_branch_coverage=1 00:08:56.852 --rc genhtml_function_coverage=1 00:08:56.852 --rc genhtml_legend=1 00:08:56.852 --rc geninfo_all_blocks=1 00:08:56.852 --rc geninfo_unexecuted_blocks=1 00:08:56.852 00:08:56.852 ' 00:08:56.852 11:36:29 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.852 11:36:29 -- nvmf/common.sh@7 -- # uname -s 00:08:56.852 11:36:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.852 11:36:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.852 11:36:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.852 11:36:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.852 11:36:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.852 11:36:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.852 11:36:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.852 11:36:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.852 11:36:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.852 11:36:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.852 11:36:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
00:08:56.852 11:36:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:08:56.852 11:36:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.852 11:36:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.852 11:36:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:56.852 11:36:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.852 11:36:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.852 11:36:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.852 11:36:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.852 11:36:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 11:36:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 11:36:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 11:36:29 -- paths/export.sh@5 -- # export PATH 00:08:56.852 11:36:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 11:36:29 -- nvmf/common.sh@46 -- # : 0 00:08:56.852 11:36:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:56.852 11:36:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:56.852 11:36:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:56.852 11:36:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.852 11:36:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.852 11:36:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:56.852 11:36:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:56.852 11:36:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:56.852 11:36:29 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:56.853 11:36:29 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:56.853 11:36:29 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:56.853 11:36:29 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:56.853 11:36:29 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:08:56.853 11:36:29 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:56.853 11:36:29 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:56.853 11:36:29 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:56.853 11:36:29 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:56.853 11:36:29 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:56.853 11:36:29 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:56.853 11:36:29 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:56.853 INFO: JSON configuration test init 00:08:56.853 11:36:29 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:56.853 11:36:29 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:56.853 11:36:29 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:56.853 11:36:29 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:56.853 11:36:29 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:56.853 11:36:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.853 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:56.853 11:36:29 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:56.853 11:36:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.853 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:56.853 11:36:29 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:56.853 11:36:29 -- json_config/json_config.sh@98 -- # local app=target 00:08:56.853 11:36:29 -- json_config/json_config.sh@99 -- # shift 00:08:56.853 11:36:29 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:56.853 11:36:29 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:56.853 11:36:29 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:56.853 11:36:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:56.853 11:36:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:56.853 11:36:29 -- json_config/json_config.sh@111 -- # app_pid[$app]=56103 00:08:56.853 11:36:29 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:56.853 Waiting for target to run... 00:08:56.853 11:36:29 -- json_config/json_config.sh@114 -- # waitforlisten 56103 /var/tmp/spdk_tgt.sock 00:08:56.853 11:36:29 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:56.853 11:36:29 -- common/autotest_common.sh@829 -- # '[' -z 56103 ']' 00:08:56.853 11:36:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:56.853 11:36:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.853 11:36:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:56.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
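json_config_test_start_app brings up a dedicated target and then blocks in waitforlisten until the RPC server on /var/tmp/spdk_tgt.sock answers; the launch line logged above is the whole contract. One crude stand-in for that wait loop — not the harness's actual helper, and assuming rpc_get_methods is answerable in the pre-init --wait-for-rpc state — could be:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # poll until the RPC socket answers before sending any configuration
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done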
00:08:56.853 11:36:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.853 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:57.112 [2024-11-20 11:36:29.898969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:57.112 [2024-11-20 11:36:29.899168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56103 ] 00:08:57.372 [2024-11-20 11:36:30.268004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.372 [2024-11-20 11:36:30.357123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.372 [2024-11-20 11:36:30.357372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.939 11:36:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.939 11:36:30 -- common/autotest_common.sh@862 -- # return 0 00:08:57.939 11:36:30 -- json_config/json_config.sh@115 -- # echo '' 00:08:57.939 00:08:57.939 11:36:30 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:57.939 11:36:30 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:57.939 11:36:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.939 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:08:57.939 11:36:30 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:57.939 11:36:30 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:57.939 11:36:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.939 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:08:57.939 11:36:30 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:57.940 11:36:30 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:57.940 11:36:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:58.507 11:36:31 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:58.507 11:36:31 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:58.507 11:36:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.508 11:36:31 -- common/autotest_common.sh@10 -- # set +x 00:08:58.508 11:36:31 -- json_config/json_config.sh@48 -- # local ret=0 00:08:58.508 11:36:31 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:58.508 11:36:31 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:58.508 11:36:31 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:58.508 11:36:31 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:58.508 11:36:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:58.508 11:36:31 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:58.508 11:36:31 -- json_config/json_config.sh@51 -- # local get_types 00:08:58.508 11:36:31 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:58.508 11:36:31 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:58.508 11:36:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.508 11:36:31 -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.767 11:36:31 -- json_config/json_config.sh@58 -- # return 0 00:08:58.767 11:36:31 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:08:58.767 11:36:31 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:58.767 11:36:31 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:58.767 11:36:31 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:08:58.767 11:36:31 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:08:58.767 11:36:31 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:08:58.767 11:36:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.767 11:36:31 -- common/autotest_common.sh@10 -- # set +x 00:08:58.767 11:36:31 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:58.767 11:36:31 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:08:58.767 11:36:31 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:08:58.767 11:36:31 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:58.767 11:36:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:58.767 MallocForNvmf0 00:08:58.767 11:36:31 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:58.767 11:36:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:59.025 MallocForNvmf1 00:08:59.025 11:36:32 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:59.025 11:36:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:59.284 [2024-11-20 11:36:32.274096] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.284 11:36:32 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.284 11:36:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.542 11:36:32 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:59.542 11:36:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:59.802 11:36:32 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:59.802 11:36:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:00.061 11:36:32 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:00.061 11:36:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:00.323 [2024-11-20 11:36:33.168967] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:00.323 
11:36:33 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:09:00.323 11:36:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.323 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:09:00.323 11:36:33 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:09:00.323 11:36:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.323 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:09:00.323 11:36:33 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:09:00.323 11:36:33 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:00.323 11:36:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:00.581 MallocBdevForConfigChangeCheck 00:09:00.581 11:36:33 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:09:00.581 11:36:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.581 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:09:00.581 11:36:33 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:09:00.581 11:36:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:01.150 INFO: shutting down applications... 00:09:01.150 11:36:33 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:09:01.150 11:36:33 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:09:01.150 11:36:33 -- json_config/json_config.sh@431 -- # json_config_clear target 00:09:01.150 11:36:33 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:09:01.150 11:36:33 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:01.409 Calling clear_iscsi_subsystem 00:09:01.409 Calling clear_nvmf_subsystem 00:09:01.409 Calling clear_nbd_subsystem 00:09:01.409 Calling clear_ublk_subsystem 00:09:01.409 Calling clear_vhost_blk_subsystem 00:09:01.409 Calling clear_vhost_scsi_subsystem 00:09:01.409 Calling clear_scheduler_subsystem 00:09:01.409 Calling clear_bdev_subsystem 00:09:01.409 Calling clear_accel_subsystem 00:09:01.409 Calling clear_vmd_subsystem 00:09:01.409 Calling clear_sock_subsystem 00:09:01.409 Calling clear_iobuf_subsystem 00:09:01.409 11:36:34 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:01.409 11:36:34 -- json_config/json_config.sh@396 -- # count=100 00:09:01.409 11:36:34 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:09:01.409 11:36:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:01.409 11:36:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:01.409 11:36:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:01.668 11:36:34 -- json_config/json_config.sh@398 -- # break 00:09:01.668 11:36:34 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:09:01.668 11:36:34 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:09:01.668 11:36:34 -- json_config/json_config.sh@120 -- # local app=target 00:09:01.668 11:36:34 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:09:01.668 11:36:34 -- json_config/json_config.sh@124 -- # [[ -n 56103 ]] 00:09:01.668 11:36:34 -- json_config/json_config.sh@127 -- # kill -SIGINT 56103 00:09:01.668 11:36:34 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:09:01.668 11:36:34 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:01.668 11:36:34 -- json_config/json_config.sh@130 -- # kill -0 56103 00:09:01.668 11:36:34 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:02.235 11:36:35 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:02.235 11:36:35 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:02.235 11:36:35 -- json_config/json_config.sh@130 -- # kill -0 56103 00:09:02.235 11:36:35 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:09:02.235 11:36:35 -- json_config/json_config.sh@132 -- # break 00:09:02.235 11:36:35 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:09:02.235 SPDK target shutdown done 00:09:02.235 11:36:35 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:09:02.236 INFO: relaunching applications... 00:09:02.236 11:36:35 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:09:02.236 11:36:35 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:02.236 11:36:35 -- json_config/json_config.sh@98 -- # local app=target 00:09:02.236 11:36:35 -- json_config/json_config.sh@99 -- # shift 00:09:02.236 11:36:35 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:02.236 11:36:35 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:02.236 11:36:35 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:02.236 11:36:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:02.236 11:36:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:02.236 11:36:35 -- json_config/json_config.sh@111 -- # app_pid[$app]=56373 00:09:02.236 11:36:35 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:02.236 Waiting for target to run... 00:09:02.236 11:36:35 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:02.236 11:36:35 -- json_config/json_config.sh@114 -- # waitforlisten 56373 /var/tmp/spdk_tgt.sock 00:09:02.236 11:36:35 -- common/autotest_common.sh@829 -- # '[' -z 56373 ']' 00:09:02.236 11:36:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:02.236 11:36:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:02.236 11:36:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:02.236 11:36:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.236 11:36:35 -- common/autotest_common.sh@10 -- # set +x 00:09:02.236 [2024-11-20 11:36:35.172435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
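The spdk_tgt_config.json replayed by the --json relaunch above is simply the serialized result of the create_nvmf_subsystem_config sequence earlier in this test (plus the MallocBdevForConfigChangeCheck bdev used for the change-detection step). Reconstructed by hand against the same socket, that sequence is roughly:

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json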
00:09:02.236 [2024-11-20 11:36:35.172506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56373 ] 00:09:02.494 [2024-11-20 11:36:35.517828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.753 [2024-11-20 11:36:35.600556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.753 [2024-11-20 11:36:35.600730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.017 [2024-11-20 11:36:35.901591] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.017 [2024-11-20 11:36:35.933624] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:03.017 11:36:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.017 11:36:36 -- common/autotest_common.sh@862 -- # return 0 00:09:03.017 00:09:03.017 11:36:36 -- json_config/json_config.sh@115 -- # echo '' 00:09:03.017 11:36:36 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:09:03.017 INFO: Checking if target configuration is the same... 00:09:03.017 11:36:36 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:03.017 11:36:36 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:03.017 11:36:36 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:09:03.017 11:36:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.017 + '[' 2 -ne 2 ']' 00:09:03.017 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:03.018 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:03.276 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:03.276 +++ basename /dev/fd/62 00:09:03.276 ++ mktemp /tmp/62.XXX 00:09:03.276 + tmp_file_1=/tmp/62.9M5 00:09:03.276 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:03.276 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.276 + tmp_file_2=/tmp/spdk_tgt_config.json.b1K 00:09:03.276 + ret=0 00:09:03.276 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:03.535 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:03.535 + diff -u /tmp/62.9M5 /tmp/spdk_tgt_config.json.b1K 00:09:03.535 INFO: JSON config files are the same 00:09:03.535 + echo 'INFO: JSON config files are the same' 00:09:03.535 + rm /tmp/62.9M5 /tmp/spdk_tgt_config.json.b1K 00:09:03.535 + exit 0 00:09:03.535 11:36:36 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:09:03.535 INFO: changing configuration and checking if this can be detected... 00:09:03.535 11:36:36 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:09:03.535 11:36:36 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.535 11:36:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.806 11:36:36 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:03.806 11:36:36 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:09:03.806 11:36:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.806 + '[' 2 -ne 2 ']' 00:09:03.806 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:03.806 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:03.806 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:03.806 +++ basename /dev/fd/62 00:09:03.806 ++ mktemp /tmp/62.XXX 00:09:03.806 + tmp_file_1=/tmp/62.VUQ 00:09:03.806 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:03.806 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.806 + tmp_file_2=/tmp/spdk_tgt_config.json.fpk 00:09:03.806 + ret=0 00:09:03.806 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:04.064 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:04.323 + diff -u /tmp/62.VUQ /tmp/spdk_tgt_config.json.fpk 00:09:04.323 + ret=1 00:09:04.323 + echo '=== Start of file: /tmp/62.VUQ ===' 00:09:04.323 + cat /tmp/62.VUQ 00:09:04.323 + echo '=== End of file: /tmp/62.VUQ ===' 00:09:04.323 + echo '' 00:09:04.323 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fpk ===' 00:09:04.323 + cat /tmp/spdk_tgt_config.json.fpk 00:09:04.323 + echo '=== End of file: /tmp/spdk_tgt_config.json.fpk ===' 00:09:04.323 + echo '' 00:09:04.323 + rm /tmp/62.VUQ /tmp/spdk_tgt_config.json.fpk 00:09:04.323 + exit 1 00:09:04.323 INFO: configuration change detected. 00:09:04.323 11:36:37 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
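The change-detection pass that just finished flips the same comparison around: it removes MallocBdevForConfigChangeCheck, a malloc bdev the test created earlier purely as a marker (its creation falls outside this excerpt), then repeats the sorted diff and requires it to fail. Roughly, with the same helpers as above and the error handling simplified:

  # Mutate the live config in a known way, then insist the diff now reports a change.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  if test/json_config/json_diff.sh <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json; then
      echo 'ERROR: configuration change was not detected' >&2
      exit 1
  fi
  echo 'INFO: configuration change detected.'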
00:09:04.323 11:36:37 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:09:04.323 11:36:37 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:09:04.323 11:36:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.323 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:04.323 11:36:37 -- json_config/json_config.sh@360 -- # local ret=0 00:09:04.323 11:36:37 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:09:04.323 11:36:37 -- json_config/json_config.sh@370 -- # [[ -n 56373 ]] 00:09:04.323 11:36:37 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:09:04.323 11:36:37 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:09:04.323 11:36:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.323 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:04.323 11:36:37 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:09:04.323 11:36:37 -- json_config/json_config.sh@246 -- # uname -s 00:09:04.323 11:36:37 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:09:04.323 11:36:37 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:09:04.323 11:36:37 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:09:04.323 11:36:37 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:09:04.323 11:36:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.323 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:04.323 11:36:37 -- json_config/json_config.sh@376 -- # killprocess 56373 00:09:04.323 11:36:37 -- common/autotest_common.sh@936 -- # '[' -z 56373 ']' 00:09:04.323 11:36:37 -- common/autotest_common.sh@940 -- # kill -0 56373 00:09:04.323 11:36:37 -- common/autotest_common.sh@941 -- # uname 00:09:04.323 11:36:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:04.323 11:36:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56373 00:09:04.323 11:36:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:04.323 11:36:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:04.323 killing process with pid 56373 00:09:04.323 11:36:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56373' 00:09:04.323 11:36:37 -- common/autotest_common.sh@955 -- # kill 56373 00:09:04.323 11:36:37 -- common/autotest_common.sh@960 -- # wait 56373 00:09:04.582 11:36:37 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:04.582 11:36:37 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:09:04.582 11:36:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.582 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:04.582 11:36:37 -- json_config/json_config.sh@381 -- # return 0 00:09:04.582 INFO: Success 00:09:04.582 11:36:37 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:09:04.582 00:09:04.582 real 0m7.946s 00:09:04.582 user 0m10.990s 00:09:04.582 sys 0m1.934s 00:09:04.582 11:36:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.582 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:04.582 ************************************ 00:09:04.582 END TEST json_config 00:09:04.582 ************************************ 00:09:04.582 11:36:37 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:04.582 
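Every test in this excerpt tears its target down through the killprocess helper, and the trace for pid 56373 shows the shape of it: make sure the pid is still alive, check via ps that it is the expected reactor process and not a sudo wrapper, then kill it and wait so the exit status is reaped before the next test starts. A simplified stand-in for what common/autotest_common.sh does here (the real helper has more branches, for example for processes started through sudo):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                  # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK target
      [ "$name" = sudo ] && return 1              # never signal the sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                 # reap it so later pid checks stay clean
  }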
11:36:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:04.582 11:36:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.582 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:04.582 ************************************ 00:09:04.582 START TEST json_config_extra_key 00:09:04.582 ************************************ 00:09:04.582 11:36:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:04.840 11:36:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:04.840 11:36:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:04.840 11:36:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:04.840 11:36:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:04.840 11:36:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:04.840 11:36:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:04.840 11:36:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:04.840 11:36:37 -- scripts/common.sh@335 -- # IFS=.-: 00:09:04.841 11:36:37 -- scripts/common.sh@335 -- # read -ra ver1 00:09:04.841 11:36:37 -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.841 11:36:37 -- scripts/common.sh@336 -- # read -ra ver2 00:09:04.841 11:36:37 -- scripts/common.sh@337 -- # local 'op=<' 00:09:04.841 11:36:37 -- scripts/common.sh@339 -- # ver1_l=2 00:09:04.841 11:36:37 -- scripts/common.sh@340 -- # ver2_l=1 00:09:04.841 11:36:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:04.841 11:36:37 -- scripts/common.sh@343 -- # case "$op" in 00:09:04.841 11:36:37 -- scripts/common.sh@344 -- # : 1 00:09:04.841 11:36:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:04.841 11:36:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.841 11:36:37 -- scripts/common.sh@364 -- # decimal 1 00:09:04.841 11:36:37 -- scripts/common.sh@352 -- # local d=1 00:09:04.841 11:36:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.841 11:36:37 -- scripts/common.sh@354 -- # echo 1 00:09:04.841 11:36:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:04.841 11:36:37 -- scripts/common.sh@365 -- # decimal 2 00:09:04.841 11:36:37 -- scripts/common.sh@352 -- # local d=2 00:09:04.841 11:36:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.841 11:36:37 -- scripts/common.sh@354 -- # echo 2 00:09:04.841 11:36:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:04.841 11:36:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:04.841 11:36:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:04.841 11:36:37 -- scripts/common.sh@367 -- # return 0 00:09:04.841 11:36:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.841 11:36:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:04.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.841 --rc genhtml_branch_coverage=1 00:09:04.841 --rc genhtml_function_coverage=1 00:09:04.841 --rc genhtml_legend=1 00:09:04.841 --rc geninfo_all_blocks=1 00:09:04.841 --rc geninfo_unexecuted_blocks=1 00:09:04.841 00:09:04.841 ' 00:09:04.841 11:36:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:04.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.841 --rc genhtml_branch_coverage=1 00:09:04.841 --rc genhtml_function_coverage=1 00:09:04.841 --rc genhtml_legend=1 00:09:04.841 --rc geninfo_all_blocks=1 00:09:04.841 --rc geninfo_unexecuted_blocks=1 00:09:04.841 00:09:04.841 ' 
00:09:04.841 11:36:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:04.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.841 --rc genhtml_branch_coverage=1 00:09:04.841 --rc genhtml_function_coverage=1 00:09:04.841 --rc genhtml_legend=1 00:09:04.841 --rc geninfo_all_blocks=1 00:09:04.841 --rc geninfo_unexecuted_blocks=1 00:09:04.841 00:09:04.841 ' 00:09:04.841 11:36:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:04.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.841 --rc genhtml_branch_coverage=1 00:09:04.841 --rc genhtml_function_coverage=1 00:09:04.841 --rc genhtml_legend=1 00:09:04.841 --rc geninfo_all_blocks=1 00:09:04.841 --rc geninfo_unexecuted_blocks=1 00:09:04.841 00:09:04.841 ' 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.841 11:36:37 -- nvmf/common.sh@7 -- # uname -s 00:09:04.841 11:36:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.841 11:36:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.841 11:36:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.841 11:36:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.841 11:36:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.841 11:36:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.841 11:36:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.841 11:36:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.841 11:36:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.841 11:36:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.841 11:36:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:09:04.841 11:36:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:09:04.841 11:36:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.841 11:36:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.841 11:36:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:04.841 11:36:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.841 11:36:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.841 11:36:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.841 11:36:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.841 11:36:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.841 11:36:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.841 11:36:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.841 11:36:37 -- paths/export.sh@5 -- # export PATH 00:09:04.841 11:36:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.841 11:36:37 -- nvmf/common.sh@46 -- # : 0 00:09:04.841 11:36:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:04.841 11:36:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:04.841 11:36:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:04.841 11:36:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.841 11:36:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.841 11:36:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:04.841 11:36:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:04.841 11:36:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:04.841 INFO: launching applications... 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@25 -- # shift 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56555 00:09:04.841 Waiting for target to run... 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
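Launching the target is only half the job: the waitforlisten call that follows blocks until the spdk_tgt just started for this test (pid 56555) is actually accepting RPCs on /var/tmp/spdk_tgt.sock, retrying up to the max_retries=100 shown in the trace. The helper's body isn't visible in this log; a minimal stand-in with the same observable behaviour could poll the socket with an innocuous RPC (rpc_get_methods is assumed here as that probe):

  # Block until the target answers an RPC on its UNIX socket, or give up.
  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before it ever listened
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1                                # retry interval chosen for illustration
      done
      return 1
  }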
00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56555 /var/tmp/spdk_tgt.sock 00:09:04.841 11:36:37 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:04.841 11:36:37 -- common/autotest_common.sh@829 -- # '[' -z 56555 ']' 00:09:04.841 11:36:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:04.841 11:36:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:04.841 11:36:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:04.841 11:36:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.841 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:04.841 [2024-11-20 11:36:37.865963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.841 [2024-11-20 11:36:37.866045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56555 ] 00:09:05.409 [2024-11-20 11:36:38.221520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.409 [2024-11-20 11:36:38.303694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:05.409 [2024-11-20 11:36:38.303844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.977 00:09:05.977 INFO: shutting down applications... 00:09:05.977 11:36:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.977 11:36:38 -- common/autotest_common.sh@862 -- # return 0 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
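The shutdown that follows repeats the pattern already used for pid 56103 at the top of this excerpt: send SIGINT so the target can exit cleanly, then poll with kill -0 for up to 30 half-second intervals (about 15 seconds) before reporting 'SPDK target shutdown done'. Condensed from the trace, leaving out the bookkeeping that clears app_pid and the error path taken if the loop times out:

  # Ask the target to shut down, then wait for its pid to disappear.
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done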
00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56555 ]] 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56555 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56555 00:09:05.977 11:36:38 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56555 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:06.544 SPDK target shutdown done 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:06.544 Success 00:09:06.544 11:36:39 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:06.544 00:09:06.544 real 0m1.729s 00:09:06.544 user 0m1.588s 00:09:06.544 sys 0m0.406s 00:09:06.544 11:36:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:06.544 11:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:06.544 ************************************ 00:09:06.544 END TEST json_config_extra_key 00:09:06.544 ************************************ 00:09:06.544 11:36:39 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:06.544 11:36:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:06.544 11:36:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.544 11:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:06.544 ************************************ 00:09:06.544 START TEST alias_rpc 00:09:06.544 ************************************ 00:09:06.544 11:36:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:06.544 * Looking for test storage... 
00:09:06.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:06.544 11:36:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:06.544 11:36:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:06.544 11:36:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:06.544 11:36:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:06.544 11:36:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:06.544 11:36:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:06.544 11:36:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:06.544 11:36:39 -- scripts/common.sh@335 -- # IFS=.-: 00:09:06.544 11:36:39 -- scripts/common.sh@335 -- # read -ra ver1 00:09:06.544 11:36:39 -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.544 11:36:39 -- scripts/common.sh@336 -- # read -ra ver2 00:09:06.544 11:36:39 -- scripts/common.sh@337 -- # local 'op=<' 00:09:06.544 11:36:39 -- scripts/common.sh@339 -- # ver1_l=2 00:09:06.544 11:36:39 -- scripts/common.sh@340 -- # ver2_l=1 00:09:06.544 11:36:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:06.544 11:36:39 -- scripts/common.sh@343 -- # case "$op" in 00:09:06.544 11:36:39 -- scripts/common.sh@344 -- # : 1 00:09:06.544 11:36:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:06.544 11:36:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.544 11:36:39 -- scripts/common.sh@364 -- # decimal 1 00:09:06.544 11:36:39 -- scripts/common.sh@352 -- # local d=1 00:09:06.544 11:36:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.544 11:36:39 -- scripts/common.sh@354 -- # echo 1 00:09:06.544 11:36:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:06.545 11:36:39 -- scripts/common.sh@365 -- # decimal 2 00:09:06.545 11:36:39 -- scripts/common.sh@352 -- # local d=2 00:09:06.545 11:36:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.545 11:36:39 -- scripts/common.sh@354 -- # echo 2 00:09:06.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:06.545 11:36:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:06.545 11:36:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:06.545 11:36:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:06.545 11:36:39 -- scripts/common.sh@367 -- # return 0 00:09:06.545 11:36:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.545 11:36:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:06.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.545 --rc genhtml_branch_coverage=1 00:09:06.545 --rc genhtml_function_coverage=1 00:09:06.545 --rc genhtml_legend=1 00:09:06.545 --rc geninfo_all_blocks=1 00:09:06.545 --rc geninfo_unexecuted_blocks=1 00:09:06.545 00:09:06.545 ' 00:09:06.545 11:36:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:06.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.545 --rc genhtml_branch_coverage=1 00:09:06.545 --rc genhtml_function_coverage=1 00:09:06.545 --rc genhtml_legend=1 00:09:06.545 --rc geninfo_all_blocks=1 00:09:06.545 --rc geninfo_unexecuted_blocks=1 00:09:06.545 00:09:06.545 ' 00:09:06.545 11:36:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:06.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.545 --rc genhtml_branch_coverage=1 00:09:06.545 --rc genhtml_function_coverage=1 00:09:06.545 --rc genhtml_legend=1 00:09:06.545 --rc geninfo_all_blocks=1 00:09:06.545 --rc geninfo_unexecuted_blocks=1 00:09:06.545 00:09:06.545 ' 00:09:06.545 11:36:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:06.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.545 --rc genhtml_branch_coverage=1 00:09:06.545 --rc genhtml_function_coverage=1 00:09:06.545 --rc genhtml_legend=1 00:09:06.545 --rc geninfo_all_blocks=1 00:09:06.545 --rc geninfo_unexecuted_blocks=1 00:09:06.545 00:09:06.545 ' 00:09:06.545 11:36:39 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:06.545 11:36:39 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56640 00:09:06.545 11:36:39 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56640 00:09:06.545 11:36:39 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:06.545 11:36:39 -- common/autotest_common.sh@829 -- # '[' -z 56640 ']' 00:09:06.545 11:36:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.545 11:36:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.545 11:36:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.545 11:36:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.545 11:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:06.804 [2024-11-20 11:36:39.621479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:06.804 [2024-11-20 11:36:39.621578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56640 ] 00:09:06.804 [2024-11-20 11:36:39.742247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.804 [2024-11-20 11:36:39.844000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:06.804 [2024-11-20 11:36:39.844146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.739 11:36:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.739 11:36:40 -- common/autotest_common.sh@862 -- # return 0 00:09:07.739 11:36:40 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:07.999 11:36:40 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56640 00:09:07.999 11:36:40 -- common/autotest_common.sh@936 -- # '[' -z 56640 ']' 00:09:07.999 11:36:40 -- common/autotest_common.sh@940 -- # kill -0 56640 00:09:07.999 11:36:40 -- common/autotest_common.sh@941 -- # uname 00:09:07.999 11:36:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.999 11:36:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56640 00:09:07.999 killing process with pid 56640 00:09:07.999 11:36:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.999 11:36:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.999 11:36:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56640' 00:09:07.999 11:36:40 -- common/autotest_common.sh@955 -- # kill 56640 00:09:07.999 11:36:40 -- common/autotest_common.sh@960 -- # wait 56640 00:09:08.258 00:09:08.258 real 0m1.823s 00:09:08.258 user 0m2.050s 00:09:08.258 sys 0m0.417s 00:09:08.258 11:36:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.258 11:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.258 ************************************ 00:09:08.258 END TEST alias_rpc 00:09:08.258 ************************************ 00:09:08.258 11:36:41 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:09:08.258 11:36:41 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:08.258 11:36:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.258 11:36:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.258 11:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.258 ************************************ 00:09:08.258 START TEST dpdk_mem_utility 00:09:08.258 ************************************ 00:09:08.258 11:36:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:08.518 * Looking for test storage... 
00:09:08.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:08.518 11:36:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:08.518 11:36:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:08.518 11:36:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.518 11:36:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.518 11:36:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.518 11:36:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.518 11:36:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.518 11:36:41 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.518 11:36:41 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.518 11:36:41 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.518 11:36:41 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.518 11:36:41 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.518 11:36:41 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.518 11:36:41 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.518 11:36:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.518 11:36:41 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.518 11:36:41 -- scripts/common.sh@344 -- # : 1 00:09:08.518 11:36:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.518 11:36:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.518 11:36:41 -- scripts/common.sh@364 -- # decimal 1 00:09:08.518 11:36:41 -- scripts/common.sh@352 -- # local d=1 00:09:08.518 11:36:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.518 11:36:41 -- scripts/common.sh@354 -- # echo 1 00:09:08.518 11:36:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.518 11:36:41 -- scripts/common.sh@365 -- # decimal 2 00:09:08.518 11:36:41 -- scripts/common.sh@352 -- # local d=2 00:09:08.518 11:36:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.518 11:36:41 -- scripts/common.sh@354 -- # echo 2 00:09:08.518 11:36:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.518 11:36:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.518 11:36:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.518 11:36:41 -- scripts/common.sh@367 -- # return 0 00:09:08.518 11:36:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.518 11:36:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.518 --rc genhtml_branch_coverage=1 00:09:08.518 --rc genhtml_function_coverage=1 00:09:08.518 --rc genhtml_legend=1 00:09:08.518 --rc geninfo_all_blocks=1 00:09:08.518 --rc geninfo_unexecuted_blocks=1 00:09:08.518 00:09:08.518 ' 00:09:08.518 11:36:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.518 --rc genhtml_branch_coverage=1 00:09:08.518 --rc genhtml_function_coverage=1 00:09:08.518 --rc genhtml_legend=1 00:09:08.518 --rc geninfo_all_blocks=1 00:09:08.518 --rc geninfo_unexecuted_blocks=1 00:09:08.518 00:09:08.518 ' 00:09:08.518 11:36:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.518 --rc genhtml_branch_coverage=1 00:09:08.518 --rc genhtml_function_coverage=1 00:09:08.518 --rc genhtml_legend=1 00:09:08.518 --rc geninfo_all_blocks=1 00:09:08.518 --rc geninfo_unexecuted_blocks=1 00:09:08.518 00:09:08.518 ' 
00:09:08.518 11:36:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.518 --rc genhtml_branch_coverage=1 00:09:08.518 --rc genhtml_function_coverage=1 00:09:08.518 --rc genhtml_legend=1 00:09:08.518 --rc geninfo_all_blocks=1 00:09:08.518 --rc geninfo_unexecuted_blocks=1 00:09:08.518 00:09:08.518 ' 00:09:08.518 11:36:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:08.518 11:36:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56739 00:09:08.518 11:36:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:08.518 11:36:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56739 00:09:08.518 11:36:41 -- common/autotest_common.sh@829 -- # '[' -z 56739 ']' 00:09:08.518 11:36:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.518 11:36:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.518 11:36:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.518 11:36:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.518 11:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.518 [2024-11-20 11:36:41.532529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:08.518 [2024-11-20 11:36:41.532629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56739 ] 00:09:08.777 [2024-11-20 11:36:41.672816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.777 [2024-11-20 11:36:41.776659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:08.777 [2024-11-20 11:36:41.776834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.750 11:36:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.750 11:36:42 -- common/autotest_common.sh@862 -- # return 0 00:09:09.750 11:36:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:09.750 11:36:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:09.750 11:36:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.750 11:36:42 -- common/autotest_common.sh@10 -- # set +x 00:09:09.750 { 00:09:09.750 "filename": "/tmp/spdk_mem_dump.txt" 00:09:09.750 } 00:09:09.750 11:36:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.750 11:36:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:09.750 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:09.750 1 heaps totaling size 814.000000 MiB 00:09:09.750 size: 814.000000 MiB heap id: 0 00:09:09.750 end heaps---------- 00:09:09.750 8 mempools totaling size 598.116089 MiB 00:09:09.750 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:09.750 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:09.750 size: 84.521057 MiB name: bdev_io_56739 00:09:09.750 size: 51.011292 MiB name: evtpool_56739 00:09:09.750 size: 50.003479 MiB name: msgpool_56739 
00:09:09.750 size: 21.763794 MiB name: PDU_Pool 00:09:09.750 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:09.750 size: 0.026123 MiB name: Session_Pool 00:09:09.750 end mempools------- 00:09:09.750 6 memzones totaling size 4.142822 MiB 00:09:09.750 size: 1.000366 MiB name: RG_ring_0_56739 00:09:09.750 size: 1.000366 MiB name: RG_ring_1_56739 00:09:09.750 size: 1.000366 MiB name: RG_ring_4_56739 00:09:09.750 size: 1.000366 MiB name: RG_ring_5_56739 00:09:09.750 size: 0.125366 MiB name: RG_ring_2_56739 00:09:09.750 size: 0.015991 MiB name: RG_ring_3_56739 00:09:09.750 end memzones------- 00:09:09.750 11:36:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:09.750 heap id: 0 total size: 814.000000 MiB number of busy elements: 225 number of free elements: 15 00:09:09.750 list of free elements. size: 12.485657 MiB 00:09:09.750 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:09.750 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:09.750 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:09.750 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:09.750 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:09.750 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:09.750 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:09.750 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:09.750 element at address: 0x200000200000 with size: 0.837219 MiB 00:09:09.750 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:09:09.750 element at address: 0x20000b200000 with size: 0.489258 MiB 00:09:09.750 element at address: 0x200000800000 with size: 0.486877 MiB 00:09:09.750 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:09.750 element at address: 0x200027e00000 with size: 0.398132 MiB 00:09:09.750 element at address: 0x200003a00000 with size: 0.351501 MiB 00:09:09.750 list of standard malloc elements. 
size: 199.251770 MiB 00:09:09.750 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:09.750 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:09.750 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:09.750 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:09.750 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:09.750 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:09.751 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:09.751 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:09.751 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:09.751 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a59fc0 with size: 0.000183 MiB 
00:09:09.751 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:09.751 element at 
address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94f00 
with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:09.751 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:09:09.751 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6ea00 with size: 0.000183 MiB 
00:09:09.752 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:09.752 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:09.752 list of memzone associated elements. 
size: 602.262573 MiB 00:09:09.752 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:09.752 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:09.752 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:09.752 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:09.752 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:09.752 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56739_0 00:09:09.752 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:09.752 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56739_0 00:09:09.752 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:09.752 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56739_0 00:09:09.752 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:09.752 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:09.752 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:09.752 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:09.752 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:09.752 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56739 00:09:09.752 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:09.752 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56739 00:09:09.752 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:09.752 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56739 00:09:09.752 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:09.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:09.752 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:09.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:09.752 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:09.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:09.752 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:09.752 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:09.752 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:09.752 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56739 00:09:09.752 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:09.752 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56739 00:09:09.752 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:09.752 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56739 00:09:09.752 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:09.752 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56739 00:09:09.752 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:09.752 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56739 00:09:09.752 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:09.752 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:09.752 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:09.752 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:09.752 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:09.752 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:09.752 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:09.752 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56739 00:09:09.752 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:09.752 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:09.752 element at address: 0x200027e66040 with size: 0.023743 MiB 00:09:09.752 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:09.752 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:09.752 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56739 00:09:09.752 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:09:09.752 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:09.752 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:09:09.752 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56739 00:09:09.752 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:09.752 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56739 00:09:09.752 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:09:09.752 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:09.752 11:36:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:09.752 11:36:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56739 00:09:09.752 11:36:42 -- common/autotest_common.sh@936 -- # '[' -z 56739 ']' 00:09:09.752 11:36:42 -- common/autotest_common.sh@940 -- # kill -0 56739 00:09:09.752 11:36:42 -- common/autotest_common.sh@941 -- # uname 00:09:09.752 11:36:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:09.752 11:36:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56739 00:09:09.752 11:36:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:09.752 11:36:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:09.752 killing process with pid 56739 00:09:09.752 11:36:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56739' 00:09:09.752 11:36:42 -- common/autotest_common.sh@955 -- # kill 56739 00:09:09.752 11:36:42 -- common/autotest_common.sh@960 -- # wait 56739 00:09:10.012 00:09:10.012 real 0m1.707s 00:09:10.012 user 0m1.794s 00:09:10.012 sys 0m0.432s 00:09:10.012 11:36:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.012 11:36:42 -- common/autotest_common.sh@10 -- # set +x 00:09:10.012 ************************************ 00:09:10.012 END TEST dpdk_mem_utility 00:09:10.012 ************************************ 00:09:10.012 11:36:43 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:10.012 11:36:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.012 11:36:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.012 11:36:43 -- common/autotest_common.sh@10 -- # set +x 00:09:10.012 ************************************ 00:09:10.012 START TEST event 00:09:10.012 ************************************ 00:09:10.012 11:36:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:10.271 * Looking for test storage... 
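(Illustrative sketch, not part of the captured output: the killprocess sequence traced above for pid 56739 reduces to a check-then-kill-then-wait pattern. A minimal standalone version, with the pid value assumed purely for illustration, could look like this in plain bash:)
  pid=56739                                    # assumed value, taken from the trace above
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")    # resolves to reactor_0 in the run above
    [ "$name" != sudo ] && echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                    # wait only succeeds for children of this shell
  fi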
00:09:10.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:10.271 11:36:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:10.271 11:36:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:10.271 11:36:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:10.271 11:36:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:10.271 11:36:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:10.271 11:36:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:10.271 11:36:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:10.271 11:36:43 -- scripts/common.sh@335 -- # IFS=.-: 00:09:10.271 11:36:43 -- scripts/common.sh@335 -- # read -ra ver1 00:09:10.271 11:36:43 -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.271 11:36:43 -- scripts/common.sh@336 -- # read -ra ver2 00:09:10.271 11:36:43 -- scripts/common.sh@337 -- # local 'op=<' 00:09:10.271 11:36:43 -- scripts/common.sh@339 -- # ver1_l=2 00:09:10.271 11:36:43 -- scripts/common.sh@340 -- # ver2_l=1 00:09:10.271 11:36:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:10.271 11:36:43 -- scripts/common.sh@343 -- # case "$op" in 00:09:10.271 11:36:43 -- scripts/common.sh@344 -- # : 1 00:09:10.271 11:36:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:10.271 11:36:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.271 11:36:43 -- scripts/common.sh@364 -- # decimal 1 00:09:10.271 11:36:43 -- scripts/common.sh@352 -- # local d=1 00:09:10.271 11:36:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.271 11:36:43 -- scripts/common.sh@354 -- # echo 1 00:09:10.271 11:36:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:10.271 11:36:43 -- scripts/common.sh@365 -- # decimal 2 00:09:10.271 11:36:43 -- scripts/common.sh@352 -- # local d=2 00:09:10.271 11:36:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.271 11:36:43 -- scripts/common.sh@354 -- # echo 2 00:09:10.271 11:36:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:10.271 11:36:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:10.271 11:36:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:10.271 11:36:43 -- scripts/common.sh@367 -- # return 0 00:09:10.271 11:36:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.271 11:36:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:10.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.271 --rc genhtml_branch_coverage=1 00:09:10.271 --rc genhtml_function_coverage=1 00:09:10.271 --rc genhtml_legend=1 00:09:10.271 --rc geninfo_all_blocks=1 00:09:10.271 --rc geninfo_unexecuted_blocks=1 00:09:10.271 00:09:10.271 ' 00:09:10.271 11:36:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:10.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.271 --rc genhtml_branch_coverage=1 00:09:10.271 --rc genhtml_function_coverage=1 00:09:10.271 --rc genhtml_legend=1 00:09:10.271 --rc geninfo_all_blocks=1 00:09:10.271 --rc geninfo_unexecuted_blocks=1 00:09:10.271 00:09:10.271 ' 00:09:10.271 11:36:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:10.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.271 --rc genhtml_branch_coverage=1 00:09:10.271 --rc genhtml_function_coverage=1 00:09:10.271 --rc genhtml_legend=1 00:09:10.271 --rc geninfo_all_blocks=1 00:09:10.271 --rc geninfo_unexecuted_blocks=1 00:09:10.271 00:09:10.271 ' 00:09:10.271 11:36:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:10.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.272 --rc genhtml_branch_coverage=1 00:09:10.272 --rc genhtml_function_coverage=1 00:09:10.272 --rc genhtml_legend=1 00:09:10.272 --rc geninfo_all_blocks=1 00:09:10.272 --rc geninfo_unexecuted_blocks=1 00:09:10.272 00:09:10.272 ' 00:09:10.272 11:36:43 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:10.272 11:36:43 -- bdev/nbd_common.sh@6 -- # set -e 00:09:10.272 11:36:43 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:10.272 11:36:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:10.272 11:36:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.272 11:36:43 -- common/autotest_common.sh@10 -- # set +x 00:09:10.272 ************************************ 00:09:10.272 START TEST event_perf 00:09:10.272 ************************************ 00:09:10.272 11:36:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:10.272 Running I/O for 1 seconds...[2024-11-20 11:36:43.283400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:10.272 [2024-11-20 11:36:43.283535] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56836 ] 00:09:10.531 [2024-11-20 11:36:43.427156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.531 [2024-11-20 11:36:43.534863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.531 [2024-11-20 11:36:43.534992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.531 Running I/O for 1 seconds...[2024-11-20 11:36:43.535306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.531 [2024-11-20 11:36:43.535334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.911 00:09:11.911 lcore 0: 187469 00:09:11.911 lcore 1: 187469 00:09:11.911 lcore 2: 187470 00:09:11.911 lcore 3: 187469 00:09:11.911 done. 00:09:11.911 00:09:11.911 real 0m1.384s 00:09:11.911 user 0m4.202s 00:09:11.911 sys 0m0.060s 00:09:11.911 11:36:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.911 11:36:44 -- common/autotest_common.sh@10 -- # set +x 00:09:11.911 ************************************ 00:09:11.911 END TEST event_perf 00:09:11.911 ************************************ 00:09:11.911 11:36:44 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:11.911 11:36:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:11.911 11:36:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.911 11:36:44 -- common/autotest_common.sh@10 -- # set +x 00:09:11.911 ************************************ 00:09:11.911 START TEST event_reactor 00:09:11.911 ************************************ 00:09:11.911 11:36:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:11.911 [2024-11-20 11:36:44.731580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:11.911 [2024-11-20 11:36:44.731688] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56875 ] 00:09:11.911 [2024-11-20 11:36:44.873200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.170 [2024-11-20 11:36:44.973997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.110 test_start 00:09:13.110 oneshot 00:09:13.110 tick 100 00:09:13.110 tick 100 00:09:13.110 tick 250 00:09:13.110 tick 100 00:09:13.110 tick 100 00:09:13.110 tick 250 00:09:13.110 tick 500 00:09:13.110 tick 100 00:09:13.110 tick 100 00:09:13.110 tick 100 00:09:13.110 tick 250 00:09:13.110 tick 100 00:09:13.110 tick 100 00:09:13.110 test_end 00:09:13.110 00:09:13.110 real 0m1.371s 00:09:13.110 user 0m1.214s 00:09:13.110 sys 0m0.051s 00:09:13.110 11:36:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.110 11:36:46 -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 ************************************ 00:09:13.110 END TEST event_reactor 00:09:13.110 ************************************ 00:09:13.110 11:36:46 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:13.110 11:36:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:13.110 11:36:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.110 11:36:46 -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 ************************************ 00:09:13.110 START TEST event_reactor_perf 00:09:13.110 ************************************ 00:09:13.110 11:36:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:13.370 [2024-11-20 11:36:46.161522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:13.370 [2024-11-20 11:36:46.161613] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56910 ] 00:09:13.370 [2024-11-20 11:36:46.303740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.370 [2024-11-20 11:36:46.402981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.750 test_start 00:09:14.750 test_end 00:09:14.750 Performance: 450015 events per second 00:09:14.750 00:09:14.750 real 0m1.368s 00:09:14.750 user 0m1.211s 00:09:14.750 sys 0m0.051s 00:09:14.750 11:36:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.750 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.750 ************************************ 00:09:14.750 END TEST event_reactor_perf 00:09:14.751 ************************************ 00:09:14.751 11:36:47 -- event/event.sh@49 -- # uname -s 00:09:14.751 11:36:47 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:14.751 11:36:47 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:14.751 11:36:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.751 11:36:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.751 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.751 ************************************ 00:09:14.751 START TEST event_scheduler 00:09:14.751 ************************************ 00:09:14.751 11:36:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:14.751 * Looking for test storage... 00:09:14.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:14.751 11:36:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:14.751 11:36:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:14.751 11:36:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:14.751 11:36:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:14.751 11:36:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:14.751 11:36:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:14.751 11:36:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:14.751 11:36:47 -- scripts/common.sh@335 -- # IFS=.-: 00:09:14.751 11:36:47 -- scripts/common.sh@335 -- # read -ra ver1 00:09:14.751 11:36:47 -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.751 11:36:47 -- scripts/common.sh@336 -- # read -ra ver2 00:09:14.751 11:36:47 -- scripts/common.sh@337 -- # local 'op=<' 00:09:14.751 11:36:47 -- scripts/common.sh@339 -- # ver1_l=2 00:09:14.751 11:36:47 -- scripts/common.sh@340 -- # ver2_l=1 00:09:14.751 11:36:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:14.751 11:36:47 -- scripts/common.sh@343 -- # case "$op" in 00:09:14.751 11:36:47 -- scripts/common.sh@344 -- # : 1 00:09:14.751 11:36:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:14.751 11:36:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.751 11:36:47 -- scripts/common.sh@364 -- # decimal 1 00:09:14.751 11:36:47 -- scripts/common.sh@352 -- # local d=1 00:09:14.751 11:36:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.751 11:36:47 -- scripts/common.sh@354 -- # echo 1 00:09:14.751 11:36:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:14.751 11:36:47 -- scripts/common.sh@365 -- # decimal 2 00:09:15.010 11:36:47 -- scripts/common.sh@352 -- # local d=2 00:09:15.011 11:36:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.011 11:36:47 -- scripts/common.sh@354 -- # echo 2 00:09:15.011 11:36:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:15.011 11:36:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:15.011 11:36:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:15.011 11:36:47 -- scripts/common.sh@367 -- # return 0 00:09:15.011 11:36:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.011 11:36:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.011 --rc genhtml_branch_coverage=1 00:09:15.011 --rc genhtml_function_coverage=1 00:09:15.011 --rc genhtml_legend=1 00:09:15.011 --rc geninfo_all_blocks=1 00:09:15.011 --rc geninfo_unexecuted_blocks=1 00:09:15.011 00:09:15.011 ' 00:09:15.011 11:36:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.011 --rc genhtml_branch_coverage=1 00:09:15.011 --rc genhtml_function_coverage=1 00:09:15.011 --rc genhtml_legend=1 00:09:15.011 --rc geninfo_all_blocks=1 00:09:15.011 --rc geninfo_unexecuted_blocks=1 00:09:15.011 00:09:15.011 ' 00:09:15.011 11:36:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.011 --rc genhtml_branch_coverage=1 00:09:15.011 --rc genhtml_function_coverage=1 00:09:15.011 --rc genhtml_legend=1 00:09:15.011 --rc geninfo_all_blocks=1 00:09:15.011 --rc geninfo_unexecuted_blocks=1 00:09:15.011 00:09:15.011 ' 00:09:15.011 11:36:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.011 --rc genhtml_branch_coverage=1 00:09:15.011 --rc genhtml_function_coverage=1 00:09:15.011 --rc genhtml_legend=1 00:09:15.011 --rc geninfo_all_blocks=1 00:09:15.011 --rc geninfo_unexecuted_blocks=1 00:09:15.011 00:09:15.011 ' 00:09:15.011 11:36:47 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:15.011 11:36:47 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:15.011 11:36:47 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56979 00:09:15.011 11:36:47 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.011 11:36:47 -- scheduler/scheduler.sh@37 -- # waitforlisten 56979 00:09:15.011 11:36:47 -- common/autotest_common.sh@829 -- # '[' -z 56979 ']' 00:09:15.011 11:36:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.011 11:36:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.011 11:36:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
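(Illustrative sketch, not captured output: the 'lt 1.15 2' check traced above splits both versions on '.', '-' and ':' and compares them field by field; 1 is less than 2 in the first field, so it returns 0, which is why the pre-2.0 '--rc lcov_branch_coverage' style options are exported above. A simplified standalone version of that comparison, assuming purely numeric dotted versions, might be:)
  lt_sketch() {                     # return 0 when version $1 sorts before version $2
    local IFS=.-: v=0 ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( v++ ))
    done
    return 1
  }
  lt_sketch 1.15 2 && echo 'lcov 1.15 is older than 2'   # matches the branch taken above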
00:09:15.011 11:36:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.011 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:09:15.011 [2024-11-20 11:36:47.835347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:15.011 [2024-11-20 11:36:47.835409] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56979 ] 00:09:15.011 [2024-11-20 11:36:47.962408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.270 [2024-11-20 11:36:48.069372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.270 [2024-11-20 11:36:48.069567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.270 [2024-11-20 11:36:48.069784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.270 [2024-11-20 11:36:48.069788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.867 11:36:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.867 11:36:48 -- common/autotest_common.sh@862 -- # return 0 00:09:15.867 11:36:48 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:15.867 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.867 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:15.867 POWER: Env isn't set yet! 00:09:15.868 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:15.868 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:15.868 POWER: Cannot set governor of lcore 0 to userspace 00:09:15.868 POWER: Attempting to initialise PSTAT power management... 00:09:15.868 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:15.868 POWER: Cannot set governor of lcore 0 to performance 00:09:15.868 POWER: Attempting to initialise AMD PSTATE power management... 00:09:15.868 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:15.868 POWER: Cannot set governor of lcore 0 to userspace 00:09:15.868 POWER: Attempting to initialise CPPC power management... 00:09:15.868 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:15.868 POWER: Cannot set governor of lcore 0 to userspace 00:09:15.868 POWER: Attempting to initialise VM power management... 
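(Illustrative note: the POWER errors above come from DPDK probing /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; that path cannot be opened inside this guest, so the ACPI cpufreq, PSTAT, AMD PSTATE and CPPC attempts each fall back to the next. A quick manual check along the same lines, assuming nothing beyond coreutils, might be:)
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    gov="$cpu/cpufreq/scaling_governor"
    if [ -r "$gov" ]; then
      printf '%s: %s\n' "${cpu##*/}" "$(cat "$gov")"
    else
      printf '%s: no usable cpufreq interface\n' "${cpu##*/}"
    fi
  done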
00:09:15.868 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:15.868 POWER: Unable to set Power Management Environment for lcore 0 00:09:15.868 [2024-11-20 11:36:48.781507] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:15.868 [2024-11-20 11:36:48.781520] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:15.868 [2024-11-20 11:36:48.781528] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:15.868 [2024-11-20 11:36:48.781541] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:15.868 [2024-11-20 11:36:48.781548] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:15.868 [2024-11-20 11:36:48.781568] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:15.868 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.868 11:36:48 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:15.868 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.868 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:15.868 [2024-11-20 11:36:48.857885] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:15.868 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.868 11:36:48 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:15.868 11:36:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:15.868 11:36:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:15.868 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:15.868 ************************************ 00:09:15.868 START TEST scheduler_create_thread 00:09:15.868 ************************************ 00:09:15.868 11:36:48 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:09:15.868 11:36:48 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:15.868 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.868 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:15.868 2 00:09:15.868 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.868 11:36:48 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:15.868 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.868 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:15.868 3 00:09:15.868 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.868 11:36:48 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:15.868 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.868 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:16.126 4 00:09:16.126 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.126 11:36:48 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:16.127 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.127 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:16.127 5 00:09:16.127 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.127 11:36:48 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:16.127 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.127 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:16.127 6 00:09:16.127 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.127 11:36:48 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:16.127 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.127 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:16.127 7 00:09:16.127 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.127 11:36:48 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:16.127 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.127 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:16.127 8 00:09:16.127 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.127 11:36:48 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:16.127 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.127 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:16.127 9 00:09:16.127 11:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.127 11:36:48 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:16.127 11:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.127 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:09:16.385 10 00:09:16.385 11:36:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.385 11:36:49 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:16.385 11:36:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.385 11:36:49 -- common/autotest_common.sh@10 -- # set +x 00:09:17.762 11:36:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.762 11:36:50 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:17.762 11:36:50 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:17.762 11:36:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.762 11:36:50 -- common/autotest_common.sh@10 -- # set +x 00:09:18.701 11:36:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.701 11:36:51 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:18.701 11:36:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.701 11:36:51 -- common/autotest_common.sh@10 -- # set +x 00:09:19.640 11:36:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.640 11:36:52 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:19.640 11:36:52 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:19.640 11:36:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.640 11:36:52 -- common/autotest_common.sh@10 -- # set +x 00:09:20.206 11:36:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.206 00:09:20.206 real 0m4.211s 00:09:20.206 user 0m0.028s 00:09:20.206 sys 0m0.007s 00:09:20.206 11:36:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.206 11:36:53 -- common/autotest_common.sh@10 -- # set +x 00:09:20.207 
************************************ 00:09:20.207 END TEST scheduler_create_thread 00:09:20.207 ************************************ 00:09:20.207 11:36:53 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:20.207 11:36:53 -- scheduler/scheduler.sh@46 -- # killprocess 56979 00:09:20.207 11:36:53 -- common/autotest_common.sh@936 -- # '[' -z 56979 ']' 00:09:20.207 11:36:53 -- common/autotest_common.sh@940 -- # kill -0 56979 00:09:20.207 11:36:53 -- common/autotest_common.sh@941 -- # uname 00:09:20.207 11:36:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:20.207 11:36:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56979 00:09:20.207 11:36:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:20.207 11:36:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:20.207 11:36:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56979' 00:09:20.207 killing process with pid 56979 00:09:20.207 11:36:53 -- common/autotest_common.sh@955 -- # kill 56979 00:09:20.207 11:36:53 -- common/autotest_common.sh@960 -- # wait 56979 00:09:20.465 [2024-11-20 11:36:53.461746] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:20.724 00:09:20.724 real 0m6.167s 00:09:20.724 user 0m14.158s 00:09:20.724 sys 0m0.440s 00:09:20.724 11:36:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.724 11:36:53 -- common/autotest_common.sh@10 -- # set +x 00:09:20.724 ************************************ 00:09:20.724 END TEST event_scheduler 00:09:20.724 ************************************ 00:09:20.982 11:36:53 -- event/event.sh@51 -- # modprobe -n nbd 00:09:20.982 11:36:53 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:20.982 11:36:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.982 11:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.982 11:36:53 -- common/autotest_common.sh@10 -- # set +x 00:09:20.982 ************************************ 00:09:20.982 START TEST app_repeat 00:09:20.982 ************************************ 00:09:20.982 11:36:53 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:09:20.982 11:36:53 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.982 11:36:53 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.982 11:36:53 -- event/event.sh@13 -- # local nbd_list 00:09:20.982 11:36:53 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.982 11:36:53 -- event/event.sh@14 -- # local bdev_list 00:09:20.982 11:36:53 -- event/event.sh@15 -- # local repeat_times=4 00:09:20.982 11:36:53 -- event/event.sh@17 -- # modprobe nbd 00:09:20.982 11:36:53 -- event/event.sh@19 -- # repeat_pid=57113 00:09:20.982 11:36:53 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:20.982 11:36:53 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:20.982 Process app_repeat pid: 57113 00:09:20.982 11:36:53 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57113' 00:09:20.982 11:36:53 -- event/event.sh@23 -- # for i in {0..2} 00:09:20.982 spdk_app_start Round 0 00:09:20.982 11:36:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:20.982 11:36:53 -- event/event.sh@25 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:09:20.982 11:36:53 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:09:20.982 11:36:53 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:20.982 11:36:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:20.982 11:36:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:20.982 11:36:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.982 11:36:53 -- common/autotest_common.sh@10 -- # set +x 00:09:20.982 [2024-11-20 11:36:53.859099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.982 [2024-11-20 11:36:53.859192] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57113 ] 00:09:20.982 [2024-11-20 11:36:53.997008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.242 [2024-11-20 11:36:54.097710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.242 [2024-11-20 11:36:54.097710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.809 11:36:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.809 11:36:54 -- common/autotest_common.sh@862 -- # return 0 00:09:21.809 11:36:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.068 Malloc0 00:09:22.068 11:36:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.327 Malloc1 00:09:22.327 11:36:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@12 -- # local i 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.327 11:36:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:22.586 /dev/nbd0 00:09:22.586 11:36:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:22.586 11:36:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:22.586 11:36:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:22.586 11:36:55 -- common/autotest_common.sh@867 -- # local i 00:09:22.586 11:36:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:22.586 
11:36:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:22.586 11:36:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:22.586 11:36:55 -- common/autotest_common.sh@871 -- # break 00:09:22.586 11:36:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:22.586 11:36:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:22.586 11:36:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.586 1+0 records in 00:09:22.586 1+0 records out 00:09:22.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218545 s, 18.7 MB/s 00:09:22.586 11:36:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.586 11:36:55 -- common/autotest_common.sh@884 -- # size=4096 00:09:22.586 11:36:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.586 11:36:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:22.586 11:36:55 -- common/autotest_common.sh@887 -- # return 0 00:09:22.586 11:36:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.586 11:36:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.586 11:36:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:22.845 /dev/nbd1 00:09:22.845 11:36:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:22.845 11:36:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:22.845 11:36:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:22.845 11:36:55 -- common/autotest_common.sh@867 -- # local i 00:09:22.845 11:36:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:22.845 11:36:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:22.845 11:36:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:22.845 11:36:55 -- common/autotest_common.sh@871 -- # break 00:09:22.845 11:36:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:22.845 11:36:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:22.845 11:36:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.845 1+0 records in 00:09:22.845 1+0 records out 00:09:22.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211601 s, 19.4 MB/s 00:09:22.845 11:36:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.845 11:36:55 -- common/autotest_common.sh@884 -- # size=4096 00:09:22.845 11:36:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.845 11:36:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:22.845 11:36:55 -- common/autotest_common.sh@887 -- # return 0 00:09:22.845 11:36:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.845 11:36:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.845 11:36:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.845 11:36:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.845 11:36:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:23.105 { 00:09:23.105 "bdev_name": "Malloc0", 00:09:23.105 "nbd_device": "/dev/nbd0" 00:09:23.105 }, 00:09:23.105 { 00:09:23.105 "bdev_name": 
"Malloc1", 00:09:23.105 "nbd_device": "/dev/nbd1" 00:09:23.105 } 00:09:23.105 ]' 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:23.105 { 00:09:23.105 "bdev_name": "Malloc0", 00:09:23.105 "nbd_device": "/dev/nbd0" 00:09:23.105 }, 00:09:23.105 { 00:09:23.105 "bdev_name": "Malloc1", 00:09:23.105 "nbd_device": "/dev/nbd1" 00:09:23.105 } 00:09:23.105 ]' 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:23.105 /dev/nbd1' 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:23.105 /dev/nbd1' 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@65 -- # count=2 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@95 -- # count=2 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:23.105 256+0 records in 00:09:23.105 256+0 records out 00:09:23.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134102 s, 78.2 MB/s 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:23.105 256+0 records in 00:09:23.105 256+0 records out 00:09:23.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236712 s, 44.3 MB/s 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.105 11:36:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:23.365 256+0 records in 00:09:23.365 256+0 records out 00:09:23.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233642 s, 44.9 MB/s 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@51 -- # local i 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.365 11:36:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@41 -- # break 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.624 11:36:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@41 -- # break 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:23.884 11:36:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@65 -- # true 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@65 -- # count=0 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@104 -- # count=0 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:24.144 11:36:56 -- bdev/nbd_common.sh@109 -- # return 0 00:09:24.144 11:36:56 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:24.404 11:36:57 -- event/event.sh@35 -- # sleep 3 00:09:24.404 [2024-11-20 11:36:57.402612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.663 [2024-11-20 11:36:57.501893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.663 
[2024-11-20 11:36:57.501894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.663 [2024-11-20 11:36:57.543646] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:24.663 [2024-11-20 11:36:57.543698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:27.200 11:37:00 -- event/event.sh@23 -- # for i in {0..2} 00:09:27.200 spdk_app_start Round 1 00:09:27.200 11:37:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:27.200 11:37:00 -- event/event.sh@25 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:09:27.200 11:37:00 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:09:27.200 11:37:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:27.200 11:37:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:27.200 11:37:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:27.200 11:37:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.200 11:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:27.461 11:37:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.461 11:37:00 -- common/autotest_common.sh@862 -- # return 0 00:09:27.461 11:37:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:27.720 Malloc0 00:09:27.720 11:37:00 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:27.980 Malloc1 00:09:27.980 11:37:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@12 -- # local i 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.980 11:37:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:28.239 /dev/nbd0 00:09:28.239 11:37:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:28.239 11:37:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:28.239 11:37:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:28.239 11:37:01 -- common/autotest_common.sh@867 -- # local i 00:09:28.239 11:37:01 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:09:28.239 11:37:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:28.239 11:37:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:28.239 11:37:01 -- common/autotest_common.sh@871 -- # break 00:09:28.239 11:37:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:28.239 11:37:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:28.239 11:37:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.239 1+0 records in 00:09:28.239 1+0 records out 00:09:28.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337302 s, 12.1 MB/s 00:09:28.239 11:37:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.239 11:37:01 -- common/autotest_common.sh@884 -- # size=4096 00:09:28.239 11:37:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.239 11:37:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:28.239 11:37:01 -- common/autotest_common.sh@887 -- # return 0 00:09:28.239 11:37:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.239 11:37:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.239 11:37:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:28.498 /dev/nbd1 00:09:28.498 11:37:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:28.498 11:37:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:28.498 11:37:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:28.498 11:37:01 -- common/autotest_common.sh@867 -- # local i 00:09:28.498 11:37:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:28.498 11:37:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:28.498 11:37:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:28.498 11:37:01 -- common/autotest_common.sh@871 -- # break 00:09:28.498 11:37:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:28.498 11:37:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:28.498 11:37:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.498 1+0 records in 00:09:28.498 1+0 records out 00:09:28.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261037 s, 15.7 MB/s 00:09:28.498 11:37:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.498 11:37:01 -- common/autotest_common.sh@884 -- # size=4096 00:09:28.498 11:37:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.498 11:37:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:28.498 11:37:01 -- common/autotest_common.sh@887 -- # return 0 00:09:28.498 11:37:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.498 11:37:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.498 11:37:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.498 11:37:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.498 11:37:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:28.756 { 00:09:28.756 "bdev_name": "Malloc0", 00:09:28.756 "nbd_device": "/dev/nbd0" 00:09:28.756 }, 00:09:28.756 { 
00:09:28.756 "bdev_name": "Malloc1", 00:09:28.756 "nbd_device": "/dev/nbd1" 00:09:28.756 } 00:09:28.756 ]' 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:28.756 { 00:09:28.756 "bdev_name": "Malloc0", 00:09:28.756 "nbd_device": "/dev/nbd0" 00:09:28.756 }, 00:09:28.756 { 00:09:28.756 "bdev_name": "Malloc1", 00:09:28.756 "nbd_device": "/dev/nbd1" 00:09:28.756 } 00:09:28.756 ]' 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:28.756 /dev/nbd1' 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:28.756 /dev/nbd1' 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@65 -- # count=2 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@95 -- # count=2 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.756 11:37:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.757 11:37:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:28.757 11:37:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:28.757 11:37:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:28.757 11:37:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:28.757 256+0 records in 00:09:28.757 256+0 records out 00:09:28.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128635 s, 81.5 MB/s 00:09:28.757 11:37:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.757 11:37:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:29.067 256+0 records in 00:09:29.067 256+0 records out 00:09:29.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249682 s, 42.0 MB/s 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:29.067 256+0 records in 00:09:29.067 256+0 records out 00:09:29.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285664 s, 36.7 MB/s 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:29.067 
11:37:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@51 -- # local i 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.067 11:37:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@41 -- # break 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.327 11:37:02 -- bdev/nbd_common.sh@41 -- # break 00:09:29.587 11:37:02 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.587 11:37:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.587 11:37:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.587 11:37:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@65 -- # true 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@65 -- # count=0 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@104 -- # count=0 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:29.848 11:37:02 -- bdev/nbd_common.sh@109 -- # return 0 00:09:29.848 11:37:02 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:30.107 11:37:02 -- event/event.sh@35 -- # sleep 3 00:09:30.366 [2024-11-20 11:37:03.175027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.366 [2024-11-20 11:37:03.268430] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:09:30.366 [2024-11-20 11:37:03.268433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.366 [2024-11-20 11:37:03.311138] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:30.366 [2024-11-20 11:37:03.311185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:33.656 11:37:05 -- event/event.sh@23 -- # for i in {0..2} 00:09:33.656 spdk_app_start Round 2 00:09:33.656 11:37:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:33.656 11:37:05 -- event/event.sh@25 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:09:33.656 11:37:05 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:09:33.656 11:37:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:33.656 11:37:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:33.656 11:37:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:33.656 11:37:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.656 11:37:05 -- common/autotest_common.sh@10 -- # set +x 00:09:33.656 11:37:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.656 11:37:06 -- common/autotest_common.sh@862 -- # return 0 00:09:33.656 11:37:06 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:33.656 Malloc0 00:09:33.656 11:37:06 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:33.915 Malloc1 00:09:33.915 11:37:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@12 -- # local i 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:33.915 /dev/nbd0 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:33.915 11:37:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:33.915 11:37:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:33.915 11:37:06 -- common/autotest_common.sh@867 -- # local i 00:09:33.915 11:37:06 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:33.915 11:37:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:33.915 11:37:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:33.915 11:37:06 -- common/autotest_common.sh@871 -- # break 00:09:33.915 11:37:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:33.915 11:37:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:33.916 11:37:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.916 1+0 records in 00:09:33.916 1+0 records out 00:09:33.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270152 s, 15.2 MB/s 00:09:33.916 11:37:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.916 11:37:06 -- common/autotest_common.sh@884 -- # size=4096 00:09:33.916 11:37:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:34.175 11:37:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.175 11:37:06 -- common/autotest_common.sh@887 -- # return 0 00:09:34.175 11:37:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:34.175 11:37:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:34.175 11:37:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:34.175 /dev/nbd1 00:09:34.175 11:37:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:34.175 11:37:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:34.175 11:37:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:34.175 11:37:07 -- common/autotest_common.sh@867 -- # local i 00:09:34.175 11:37:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.175 11:37:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.175 11:37:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:34.175 11:37:07 -- common/autotest_common.sh@871 -- # break 00:09:34.175 11:37:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.175 11:37:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.175 11:37:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:34.175 1+0 records in 00:09:34.175 1+0 records out 00:09:34.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332752 s, 12.3 MB/s 00:09:34.175 11:37:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:34.175 11:37:07 -- common/autotest_common.sh@884 -- # size=4096 00:09:34.175 11:37:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:34.175 11:37:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.175 11:37:07 -- common/autotest_common.sh@887 -- # return 0 00:09:34.175 11:37:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:34.175 11:37:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:34.175 11:37:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.175 11:37:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.175 11:37:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.434 11:37:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:34.434 { 00:09:34.434 "bdev_name": "Malloc0", 00:09:34.434 "nbd_device": "/dev/nbd0" 
00:09:34.434 }, 00:09:34.434 { 00:09:34.434 "bdev_name": "Malloc1", 00:09:34.434 "nbd_device": "/dev/nbd1" 00:09:34.434 } 00:09:34.434 ]' 00:09:34.434 11:37:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:34.434 { 00:09:34.434 "bdev_name": "Malloc0", 00:09:34.434 "nbd_device": "/dev/nbd0" 00:09:34.434 }, 00:09:34.434 { 00:09:34.434 "bdev_name": "Malloc1", 00:09:34.434 "nbd_device": "/dev/nbd1" 00:09:34.434 } 00:09:34.434 ]' 00:09:34.434 11:37:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:34.694 /dev/nbd1' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:34.694 /dev/nbd1' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@65 -- # count=2 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@95 -- # count=2 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:34.694 256+0 records in 00:09:34.694 256+0 records out 00:09:34.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00570806 s, 184 MB/s 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:34.694 256+0 records in 00:09:34.694 256+0 records out 00:09:34.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189866 s, 55.2 MB/s 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:34.694 256+0 records in 00:09:34.694 256+0 records out 00:09:34.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202759 s, 51.7 MB/s 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@51 -- # local i 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.694 11:37:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@41 -- # break 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.953 11:37:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@41 -- # break 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.212 11:37:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@65 -- # true 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@65 -- # count=0 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@104 -- # count=0 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:35.472 11:37:08 -- bdev/nbd_common.sh@109 -- # return 0 00:09:35.472 11:37:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:35.732 11:37:08 -- event/event.sh@35 -- # sleep 3 00:09:35.991 [2024-11-20 11:37:08.821827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.991 [2024-11-20 11:37:08.921258] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:09:35.991 [2024-11-20 11:37:08.921259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.991 [2024-11-20 11:37:08.963669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:35.991 [2024-11-20 11:37:08.963725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:38.606 11:37:11 -- event/event.sh@38 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:09:38.606 11:37:11 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:09:38.606 11:37:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:38.606 11:37:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:38.606 11:37:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:38.606 11:37:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.606 11:37:11 -- common/autotest_common.sh@10 -- # set +x 00:09:38.865 11:37:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.865 11:37:11 -- common/autotest_common.sh@862 -- # return 0 00:09:38.865 11:37:11 -- event/event.sh@39 -- # killprocess 57113 00:09:38.865 11:37:11 -- common/autotest_common.sh@936 -- # '[' -z 57113 ']' 00:09:38.865 11:37:11 -- common/autotest_common.sh@940 -- # kill -0 57113 00:09:38.865 11:37:11 -- common/autotest_common.sh@941 -- # uname 00:09:38.865 11:37:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:38.865 11:37:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57113 00:09:39.123 11:37:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:39.123 11:37:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:39.123 killing process with pid 57113 00:09:39.123 11:37:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57113' 00:09:39.123 11:37:11 -- common/autotest_common.sh@955 -- # kill 57113 00:09:39.123 11:37:11 -- common/autotest_common.sh@960 -- # wait 57113 00:09:39.123 spdk_app_start is called in Round 0. 00:09:39.123 Shutdown signal received, stop current app iteration 00:09:39.123 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:39.123 spdk_app_start is called in Round 1. 00:09:39.123 Shutdown signal received, stop current app iteration 00:09:39.123 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:39.123 spdk_app_start is called in Round 2. 00:09:39.123 Shutdown signal received, stop current app iteration 00:09:39.123 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:39.123 spdk_app_start is called in Round 3. 
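The nbd_dd_data_verify passes logged above (one per app_repeat round) boil down to a small dd-and-cmp loop: fill a scratch file from /dev/urandom, push it onto every exported NBD device with O_DIRECT, then byte-compare each device against the scratch file. A minimal standalone sketch of that pattern, assuming /dev/nbd0 and /dev/nbd1 are already connected and using an illustrative scratch path rather than the repo's own:

    #!/usr/bin/env bash
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)     # assumed to be connected already
    tmp_file=/tmp/nbdrandtest          # illustrative path, not the test's

    # Write phase: 1 MiB of random data, copied to each device with O_DIRECT.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: the first 1 MiB of every device must match the scratch file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm -f "$tmp_file"

oflag=direct keeps the writes out of the page cache, so the cmp that follows actually reads back what landed on the Malloc bdevs behind the devices.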
00:09:39.123 Shutdown signal received, stop current app iteration 00:09:39.123 11:37:12 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:39.123 11:37:12 -- event/event.sh@42 -- # return 0 00:09:39.123 00:09:39.123 real 0m18.307s 00:09:39.123 user 0m40.421s 00:09:39.123 sys 0m3.012s 00:09:39.123 11:37:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:39.123 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:09:39.123 ************************************ 00:09:39.123 END TEST app_repeat 00:09:39.123 ************************************ 00:09:39.381 11:37:12 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:39.381 11:37:12 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:39.381 11:37:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:39.381 11:37:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.381 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:09:39.381 ************************************ 00:09:39.381 START TEST cpu_locks 00:09:39.381 ************************************ 00:09:39.381 11:37:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:39.381 * Looking for test storage... 00:09:39.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:39.381 11:37:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:39.381 11:37:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:39.381 11:37:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:39.381 11:37:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:39.381 11:37:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:39.381 11:37:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:39.381 11:37:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:39.381 11:37:12 -- scripts/common.sh@335 -- # IFS=.-: 00:09:39.381 11:37:12 -- scripts/common.sh@335 -- # read -ra ver1 00:09:39.381 11:37:12 -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.381 11:37:12 -- scripts/common.sh@336 -- # read -ra ver2 00:09:39.381 11:37:12 -- scripts/common.sh@337 -- # local 'op=<' 00:09:39.381 11:37:12 -- scripts/common.sh@339 -- # ver1_l=2 00:09:39.381 11:37:12 -- scripts/common.sh@340 -- # ver2_l=1 00:09:39.381 11:37:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:39.381 11:37:12 -- scripts/common.sh@343 -- # case "$op" in 00:09:39.381 11:37:12 -- scripts/common.sh@344 -- # : 1 00:09:39.381 11:37:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:39.381 11:37:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.381 11:37:12 -- scripts/common.sh@364 -- # decimal 1 00:09:39.381 11:37:12 -- scripts/common.sh@352 -- # local d=1 00:09:39.381 11:37:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.381 11:37:12 -- scripts/common.sh@354 -- # echo 1 00:09:39.381 11:37:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:39.381 11:37:12 -- scripts/common.sh@365 -- # decimal 2 00:09:39.381 11:37:12 -- scripts/common.sh@352 -- # local d=2 00:09:39.381 11:37:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.381 11:37:12 -- scripts/common.sh@354 -- # echo 2 00:09:39.381 11:37:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:39.381 11:37:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:39.381 11:37:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:39.381 11:37:12 -- scripts/common.sh@367 -- # return 0 00:09:39.381 11:37:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.381 11:37:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.381 --rc genhtml_branch_coverage=1 00:09:39.381 --rc genhtml_function_coverage=1 00:09:39.381 --rc genhtml_legend=1 00:09:39.381 --rc geninfo_all_blocks=1 00:09:39.381 --rc geninfo_unexecuted_blocks=1 00:09:39.381 00:09:39.381 ' 00:09:39.381 11:37:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.381 --rc genhtml_branch_coverage=1 00:09:39.381 --rc genhtml_function_coverage=1 00:09:39.381 --rc genhtml_legend=1 00:09:39.381 --rc geninfo_all_blocks=1 00:09:39.381 --rc geninfo_unexecuted_blocks=1 00:09:39.381 00:09:39.381 ' 00:09:39.381 11:37:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.381 --rc genhtml_branch_coverage=1 00:09:39.381 --rc genhtml_function_coverage=1 00:09:39.381 --rc genhtml_legend=1 00:09:39.381 --rc geninfo_all_blocks=1 00:09:39.381 --rc geninfo_unexecuted_blocks=1 00:09:39.381 00:09:39.381 ' 00:09:39.381 11:37:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.381 --rc genhtml_branch_coverage=1 00:09:39.381 --rc genhtml_function_coverage=1 00:09:39.381 --rc genhtml_legend=1 00:09:39.381 --rc geninfo_all_blocks=1 00:09:39.381 --rc geninfo_unexecuted_blocks=1 00:09:39.381 00:09:39.381 ' 00:09:39.381 11:37:12 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:39.381 11:37:12 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:39.381 11:37:12 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:39.381 11:37:12 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:39.381 11:37:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:39.381 11:37:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.381 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:09:39.639 ************************************ 00:09:39.639 START TEST default_locks 00:09:39.639 ************************************ 00:09:39.639 11:37:12 -- common/autotest_common.sh@1114 -- # default_locks 00:09:39.639 11:37:12 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57741 00:09:39.639 11:37:12 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:39.639 11:37:12 -- event/cpu_locks.sh@47 -- # waitforlisten 
57741 00:09:39.639 11:37:12 -- common/autotest_common.sh@829 -- # '[' -z 57741 ']' 00:09:39.639 11:37:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.639 11:37:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.639 11:37:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.639 11:37:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.639 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:09:39.639 [2024-11-20 11:37:12.481017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:39.639 [2024-11-20 11:37:12.481090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57741 ] 00:09:39.639 [2024-11-20 11:37:12.618646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.897 [2024-11-20 11:37:12.719143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.897 [2024-11-20 11:37:12.719288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.462 11:37:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.462 11:37:13 -- common/autotest_common.sh@862 -- # return 0 00:09:40.462 11:37:13 -- event/cpu_locks.sh@49 -- # locks_exist 57741 00:09:40.462 11:37:13 -- event/cpu_locks.sh@22 -- # lslocks -p 57741 00:09:40.462 11:37:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:40.720 11:37:13 -- event/cpu_locks.sh@50 -- # killprocess 57741 00:09:40.720 11:37:13 -- common/autotest_common.sh@936 -- # '[' -z 57741 ']' 00:09:40.720 11:37:13 -- common/autotest_common.sh@940 -- # kill -0 57741 00:09:40.720 11:37:13 -- common/autotest_common.sh@941 -- # uname 00:09:40.720 11:37:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:40.720 11:37:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57741 00:09:40.720 11:37:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:40.720 11:37:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:40.720 killing process with pid 57741 00:09:40.720 11:37:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57741' 00:09:40.720 11:37:13 -- common/autotest_common.sh@955 -- # kill 57741 00:09:40.720 11:37:13 -- common/autotest_common.sh@960 -- # wait 57741 00:09:40.978 11:37:14 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57741 00:09:40.978 11:37:14 -- common/autotest_common.sh@650 -- # local es=0 00:09:40.978 11:37:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57741 00:09:40.978 11:37:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:40.978 11:37:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.978 11:37:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:40.978 11:37:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.978 11:37:14 -- common/autotest_common.sh@653 -- # waitforlisten 57741 00:09:40.978 11:37:14 -- common/autotest_common.sh@829 -- # '[' -z 57741 ']' 00:09:40.978 11:37:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.978 11:37:14 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.978 11:37:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.978 11:37:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.978 11:37:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.978 ERROR: process (pid: 57741) is no longer running 00:09:40.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57741) - No such process 00:09:40.978 11:37:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.978 11:37:14 -- common/autotest_common.sh@862 -- # return 1 00:09:40.978 11:37:14 -- common/autotest_common.sh@653 -- # es=1 00:09:40.978 11:37:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.978 11:37:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.978 11:37:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.978 11:37:14 -- event/cpu_locks.sh@54 -- # no_locks 00:09:40.978 11:37:14 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:40.979 11:37:14 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:40.979 11:37:14 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:40.979 00:09:40.979 real 0m1.584s 00:09:40.979 user 0m1.662s 00:09:40.979 sys 0m0.443s 00:09:40.979 11:37:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:40.979 11:37:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 ************************************ 00:09:40.979 END TEST default_locks 00:09:40.979 ************************************ 00:09:41.237 11:37:14 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:41.237 11:37:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:41.237 11:37:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.237 11:37:14 -- common/autotest_common.sh@10 -- # set +x 00:09:41.237 ************************************ 00:09:41.237 START TEST default_locks_via_rpc 00:09:41.237 ************************************ 00:09:41.237 11:37:14 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:09:41.237 11:37:14 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57799 00:09:41.237 11:37:14 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:41.237 11:37:14 -- event/cpu_locks.sh@63 -- # waitforlisten 57799 00:09:41.237 11:37:14 -- common/autotest_common.sh@829 -- # '[' -z 57799 ']' 00:09:41.237 11:37:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.237 11:37:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.237 11:37:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.237 11:37:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.237 11:37:14 -- common/autotest_common.sh@10 -- # set +x 00:09:41.237 [2024-11-20 11:37:14.140410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
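The locks_exist helper exercised in the default_locks run above is, as far as the trace shows, just lslocks filtered for the SPDK core-lock files. A rough equivalent, with the pid passed in as a placeholder argument:

    #!/usr/bin/env bash
    set -euo pipefail

    pid=${1:?usage: $0 <spdk_tgt pid>}   # placeholder; pass the target's pid

    # spdk_tgt holds a lock on a /var/tmp/spdk_cpu_lock_* file for every core it
    # claims; lslocks lists the locks owned by a single process.
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds at least one CPU core lock"
    else
        echo "pid $pid holds no CPU core locks" >&2
        exit 1
    fi

The _via_rpc variant starting here runs the same check, but only after toggling the locks at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs visible just below.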
00:09:41.237 [2024-11-20 11:37:14.140479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57799 ] 00:09:41.237 [2024-11-20 11:37:14.275917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.496 [2024-11-20 11:37:14.370957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.496 [2024-11-20 11:37:14.371084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.064 11:37:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.064 11:37:15 -- common/autotest_common.sh@862 -- # return 0 00:09:42.064 11:37:15 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:42.064 11:37:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.064 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:09:42.064 11:37:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.064 11:37:15 -- event/cpu_locks.sh@67 -- # no_locks 00:09:42.064 11:37:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:42.064 11:37:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:42.064 11:37:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:42.064 11:37:15 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:42.064 11:37:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.064 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:09:42.064 11:37:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.064 11:37:15 -- event/cpu_locks.sh@71 -- # locks_exist 57799 00:09:42.064 11:37:15 -- event/cpu_locks.sh@22 -- # lslocks -p 57799 00:09:42.064 11:37:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:42.630 11:37:15 -- event/cpu_locks.sh@73 -- # killprocess 57799 00:09:42.630 11:37:15 -- common/autotest_common.sh@936 -- # '[' -z 57799 ']' 00:09:42.630 11:37:15 -- common/autotest_common.sh@940 -- # kill -0 57799 00:09:42.630 11:37:15 -- common/autotest_common.sh@941 -- # uname 00:09:42.630 11:37:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:42.630 11:37:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57799 00:09:42.630 11:37:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:42.630 11:37:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:42.630 killing process with pid 57799 00:09:42.630 11:37:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57799' 00:09:42.630 11:37:15 -- common/autotest_common.sh@955 -- # kill 57799 00:09:42.630 11:37:15 -- common/autotest_common.sh@960 -- # wait 57799 00:09:43.200 00:09:43.200 real 0m1.868s 00:09:43.200 user 0m1.974s 00:09:43.200 sys 0m0.570s 00:09:43.200 11:37:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.200 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:09:43.200 ************************************ 00:09:43.200 END TEST default_locks_via_rpc 00:09:43.200 ************************************ 00:09:43.200 11:37:15 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:43.200 11:37:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:43.200 11:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.200 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:09:43.200 
************************************ 00:09:43.200 START TEST non_locking_app_on_locked_coremask 00:09:43.200 ************************************ 00:09:43.200 11:37:16 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:09:43.200 11:37:16 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57863 00:09:43.200 11:37:16 -- event/cpu_locks.sh@81 -- # waitforlisten 57863 /var/tmp/spdk.sock 00:09:43.200 11:37:16 -- common/autotest_common.sh@829 -- # '[' -z 57863 ']' 00:09:43.200 11:37:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.200 11:37:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.200 11:37:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.200 11:37:16 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:43.200 11:37:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.200 11:37:16 -- common/autotest_common.sh@10 -- # set +x 00:09:43.200 [2024-11-20 11:37:16.066987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:43.200 [2024-11-20 11:37:16.067062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57863 ] 00:09:43.200 [2024-11-20 11:37:16.206921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.460 [2024-11-20 11:37:16.302054] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:43.460 [2024-11-20 11:37:16.302192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.027 11:37:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.027 11:37:16 -- common/autotest_common.sh@862 -- # return 0 00:09:44.027 11:37:16 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57891 00:09:44.027 11:37:16 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:44.027 11:37:16 -- event/cpu_locks.sh@85 -- # waitforlisten 57891 /var/tmp/spdk2.sock 00:09:44.027 11:37:16 -- common/autotest_common.sh@829 -- # '[' -z 57891 ']' 00:09:44.027 11:37:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:44.027 11:37:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.027 11:37:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:44.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:44.027 11:37:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.027 11:37:16 -- common/autotest_common.sh@10 -- # set +x 00:09:44.027 [2024-11-20 11:37:16.987751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:44.027 [2024-11-20 11:37:16.987822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57891 ] 00:09:44.339 [2024-11-20 11:37:17.117847] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:44.339 [2024-11-20 11:37:17.117902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.339 [2024-11-20 11:37:17.323809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:44.339 [2024-11-20 11:37:17.323946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.905 11:37:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.905 11:37:17 -- common/autotest_common.sh@862 -- # return 0 00:09:44.905 11:37:17 -- event/cpu_locks.sh@87 -- # locks_exist 57863 00:09:44.905 11:37:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:44.905 11:37:17 -- event/cpu_locks.sh@22 -- # lslocks -p 57863 00:09:45.840 11:37:18 -- event/cpu_locks.sh@89 -- # killprocess 57863 00:09:45.840 11:37:18 -- common/autotest_common.sh@936 -- # '[' -z 57863 ']' 00:09:45.840 11:37:18 -- common/autotest_common.sh@940 -- # kill -0 57863 00:09:45.840 11:37:18 -- common/autotest_common.sh@941 -- # uname 00:09:45.840 11:37:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:45.840 11:37:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57863 00:09:45.840 11:37:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:45.840 11:37:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:45.840 killing process with pid 57863 00:09:45.840 11:37:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57863' 00:09:45.840 11:37:18 -- common/autotest_common.sh@955 -- # kill 57863 00:09:45.840 11:37:18 -- common/autotest_common.sh@960 -- # wait 57863 00:09:46.406 11:37:19 -- event/cpu_locks.sh@90 -- # killprocess 57891 00:09:46.406 11:37:19 -- common/autotest_common.sh@936 -- # '[' -z 57891 ']' 00:09:46.406 11:37:19 -- common/autotest_common.sh@940 -- # kill -0 57891 00:09:46.406 11:37:19 -- common/autotest_common.sh@941 -- # uname 00:09:46.406 11:37:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:46.406 11:37:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57891 00:09:46.406 11:37:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:46.406 11:37:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:46.406 killing process with pid 57891 00:09:46.406 11:37:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57891' 00:09:46.406 11:37:19 -- common/autotest_common.sh@955 -- # kill 57891 00:09:46.406 11:37:19 -- common/autotest_common.sh@960 -- # wait 57891 00:09:46.973 00:09:46.973 real 0m3.731s 00:09:46.973 user 0m4.083s 00:09:46.973 sys 0m1.004s 00:09:46.973 11:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.973 11:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:46.973 ************************************ 00:09:46.973 END TEST non_locking_app_on_locked_coremask 00:09:46.973 ************************************ 00:09:46.973 11:37:19 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:46.973 11:37:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:46.973 11:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.973 11:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:46.973 ************************************ 00:09:46.973 START TEST locking_app_on_unlocked_coremask 00:09:46.973 ************************************ 00:09:46.973 11:37:19 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:09:46.973 11:37:19 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57971 00:09:46.973 11:37:19 -- event/cpu_locks.sh@99 -- # waitforlisten 57971 /var/tmp/spdk.sock 00:09:46.973 11:37:19 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:46.973 11:37:19 -- common/autotest_common.sh@829 -- # '[' -z 57971 ']' 00:09:46.973 11:37:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.973 11:37:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.973 11:37:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.973 11:37:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.973 11:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:46.973 [2024-11-20 11:37:19.850997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:46.973 [2024-11-20 11:37:19.851063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57971 ] 00:09:46.973 [2024-11-20 11:37:19.986888] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:46.973 [2024-11-20 11:37:19.986957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.231 [2024-11-20 11:37:20.084015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:47.231 [2024-11-20 11:37:20.084164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.818 11:37:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.818 11:37:20 -- common/autotest_common.sh@862 -- # return 0 00:09:47.818 11:37:20 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57999 00:09:47.818 11:37:20 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:47.818 11:37:20 -- event/cpu_locks.sh@103 -- # waitforlisten 57999 /var/tmp/spdk2.sock 00:09:47.818 11:37:20 -- common/autotest_common.sh@829 -- # '[' -z 57999 ']' 00:09:47.818 11:37:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:47.818 11:37:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:47.818 11:37:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:47.818 11:37:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.818 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:09:47.818 [2024-11-20 11:37:20.835563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
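The unlocked-coremask case being set up here runs two targets on the same core 0: the first was started with --disable-cpumask-locks, so it never takes the core lock, leaving the second free to claim it; the second only needs its own RPC socket so the two do not collide. A hedged sketch of that launch sequence (binary and script paths are assumptions, and the readiness poll is simplified compared to waitforlisten):

    #!/usr/bin/env bash
    set -euo pipefail

    SPDK_TGT=./build/bin/spdk_tgt        # path assumed; adjust to your tree
    RPC=./scripts/rpc.py                 # path assumed; adjust to your tree

    # First target: core 0, core locking disabled.
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &
    pid1=$!

    # Second target: same core, separate RPC socket.
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!

    # Simplified stand-in for waitforlisten: poll until each socket answers.
    for sock in /var/tmp/spdk.sock /var/tmp/spdk2.sock; do
        until "$RPC" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
            sleep 0.5
        done
    done
    echo "targets $pid1 and $pid2 share core 0"

The locked-coremask case a little further down flips the setup: with the first target holding the lock on core 0, the second one's claim fails and the suite asserts exactly that failure.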
00:09:47.818 [2024-11-20 11:37:20.835707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57999 ] 00:09:48.076 [2024-11-20 11:37:20.974150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.335 [2024-11-20 11:37:21.185058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:48.335 [2024-11-20 11:37:21.185194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.901 11:37:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.901 11:37:21 -- common/autotest_common.sh@862 -- # return 0 00:09:48.901 11:37:21 -- event/cpu_locks.sh@105 -- # locks_exist 57999 00:09:48.901 11:37:21 -- event/cpu_locks.sh@22 -- # lslocks -p 57999 00:09:48.901 11:37:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:49.467 11:37:22 -- event/cpu_locks.sh@107 -- # killprocess 57971 00:09:49.467 11:37:22 -- common/autotest_common.sh@936 -- # '[' -z 57971 ']' 00:09:49.467 11:37:22 -- common/autotest_common.sh@940 -- # kill -0 57971 00:09:49.467 11:37:22 -- common/autotest_common.sh@941 -- # uname 00:09:49.467 11:37:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:49.467 11:37:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57971 00:09:49.467 11:37:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:49.467 11:37:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:49.467 killing process with pid 57971 00:09:49.467 11:37:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57971' 00:09:49.467 11:37:22 -- common/autotest_common.sh@955 -- # kill 57971 00:09:49.467 11:37:22 -- common/autotest_common.sh@960 -- # wait 57971 00:09:50.403 11:37:23 -- event/cpu_locks.sh@108 -- # killprocess 57999 00:09:50.403 11:37:23 -- common/autotest_common.sh@936 -- # '[' -z 57999 ']' 00:09:50.403 11:37:23 -- common/autotest_common.sh@940 -- # kill -0 57999 00:09:50.403 11:37:23 -- common/autotest_common.sh@941 -- # uname 00:09:50.403 11:37:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:50.403 11:37:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57999 00:09:50.403 11:37:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:50.403 11:37:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:50.403 killing process with pid 57999 00:09:50.403 11:37:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57999' 00:09:50.403 11:37:23 -- common/autotest_common.sh@955 -- # kill 57999 00:09:50.403 11:37:23 -- common/autotest_common.sh@960 -- # wait 57999 00:09:50.662 00:09:50.662 real 0m3.761s 00:09:50.662 user 0m4.159s 00:09:50.662 sys 0m1.031s 00:09:50.662 11:37:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:50.662 11:37:23 -- common/autotest_common.sh@10 -- # set +x 00:09:50.662 ************************************ 00:09:50.662 END TEST locking_app_on_unlocked_coremask 00:09:50.662 ************************************ 00:09:50.662 11:37:23 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:50.662 11:37:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:50.662 11:37:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:50.662 11:37:23 -- common/autotest_common.sh@10 -- # set +x 
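The killprocess calls that just tore both targets down follow one defensive pattern: confirm the pid is still alive, confirm it really is an SPDK reactor (and not, say, sudo), then kill it and reap it with wait. A condensed sketch, assuming the target was started as a child of the same shell so that wait can collect it:

    #!/usr/bin/env bash
    set -euo pipefail

    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an spdk_tgt
        if [ "$name" = sudo ]; then
            echo "refusing to kill sudo ($pid)" >&2
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap; ignore the signal exit status
    }

wait only works on children of the calling shell, which is why each spdk_tgt is launched and later reaped from the same test shell.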
00:09:50.662 ************************************ 00:09:50.662 START TEST locking_app_on_locked_coremask 00:09:50.662 ************************************ 00:09:50.662 11:37:23 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:09:50.662 11:37:23 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58074 00:09:50.662 11:37:23 -- event/cpu_locks.sh@116 -- # waitforlisten 58074 /var/tmp/spdk.sock 00:09:50.662 11:37:23 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:50.662 11:37:23 -- common/autotest_common.sh@829 -- # '[' -z 58074 ']' 00:09:50.662 11:37:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.662 11:37:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.662 11:37:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.662 11:37:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.662 11:37:23 -- common/autotest_common.sh@10 -- # set +x 00:09:50.662 [2024-11-20 11:37:23.673643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:50.662 [2024-11-20 11:37:23.673742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58074 ] 00:09:50.921 [2024-11-20 11:37:23.813222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.921 [2024-11-20 11:37:23.914152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:50.921 [2024-11-20 11:37:23.914289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.859 11:37:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.859 11:37:24 -- common/autotest_common.sh@862 -- # return 0 00:09:51.859 11:37:24 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58102 00:09:51.859 11:37:24 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:51.859 11:37:24 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58102 /var/tmp/spdk2.sock 00:09:51.859 11:37:24 -- common/autotest_common.sh@650 -- # local es=0 00:09:51.859 11:37:24 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58102 /var/tmp/spdk2.sock 00:09:51.859 11:37:24 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:51.859 11:37:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.859 11:37:24 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:51.859 11:37:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.859 11:37:24 -- common/autotest_common.sh@653 -- # waitforlisten 58102 /var/tmp/spdk2.sock 00:09:51.859 11:37:24 -- common/autotest_common.sh@829 -- # '[' -z 58102 ']' 00:09:51.859 11:37:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.859 11:37:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:51.859 11:37:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
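The NOT wrapper that was just applied to waitforlisten is the suite's expected-failure assertion: run the command, capture its exit status, and succeed only if that status is nonzero. A simplified stand-in (the real helper also inspects statuses above 128 and a few other details omitted here):

    #!/usr/bin/env bash
    set -euo pipefail

    # Passes only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Example: this is the shape of the assertion used for the second target,
    # which must fail to start on a core the first target has already locked.
    NOT false && echo "wrapped command failed, as expected"

The lock-conflict output that follows is what satisfies that assertion here.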
00:09:51.859 11:37:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.859 11:37:24 -- common/autotest_common.sh@10 -- # set +x 00:09:51.859 [2024-11-20 11:37:24.605434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:51.859 [2024-11-20 11:37:24.605517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58102 ] 00:09:51.859 [2024-11-20 11:37:24.734245] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58074 has claimed it. 00:09:51.859 [2024-11-20 11:37:24.734317] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:52.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58102) - No such process 00:09:52.428 ERROR: process (pid: 58102) is no longer running 00:09:52.428 11:37:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.428 11:37:25 -- common/autotest_common.sh@862 -- # return 1 00:09:52.428 11:37:25 -- common/autotest_common.sh@653 -- # es=1 00:09:52.428 11:37:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:52.428 11:37:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:52.428 11:37:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:52.428 11:37:25 -- event/cpu_locks.sh@122 -- # locks_exist 58074 00:09:52.428 11:37:25 -- event/cpu_locks.sh@22 -- # lslocks -p 58074 00:09:52.428 11:37:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:52.997 11:37:25 -- event/cpu_locks.sh@124 -- # killprocess 58074 00:09:52.997 11:37:25 -- common/autotest_common.sh@936 -- # '[' -z 58074 ']' 00:09:52.997 11:37:25 -- common/autotest_common.sh@940 -- # kill -0 58074 00:09:52.997 11:37:25 -- common/autotest_common.sh@941 -- # uname 00:09:52.997 11:37:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.997 11:37:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58074 00:09:52.997 killing process with pid 58074 00:09:52.997 11:37:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.997 11:37:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.997 11:37:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58074' 00:09:52.997 11:37:25 -- common/autotest_common.sh@955 -- # kill 58074 00:09:52.997 11:37:25 -- common/autotest_common.sh@960 -- # wait 58074 00:09:53.256 00:09:53.256 real 0m2.512s 00:09:53.256 user 0m2.788s 00:09:53.256 sys 0m0.635s 00:09:53.256 ************************************ 00:09:53.256 END TEST locking_app_on_locked_coremask 00:09:53.256 ************************************ 00:09:53.256 11:37:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.256 11:37:26 -- common/autotest_common.sh@10 -- # set +x 00:09:53.256 11:37:26 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:53.256 11:37:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.256 11:37:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.256 11:37:26 -- common/autotest_common.sh@10 -- # set +x 00:09:53.256 ************************************ 00:09:53.256 START TEST locking_overlapped_coremask 00:09:53.256 ************************************ 00:09:53.256 11:37:26 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:09:53.256 11:37:26 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58153 00:09:53.256 11:37:26 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:53.256 11:37:26 -- event/cpu_locks.sh@133 -- # waitforlisten 58153 /var/tmp/spdk.sock 00:09:53.256 11:37:26 -- common/autotest_common.sh@829 -- # '[' -z 58153 ']' 00:09:53.256 11:37:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.256 11:37:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.256 11:37:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.256 11:37:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.256 11:37:26 -- common/autotest_common.sh@10 -- # set +x 00:09:53.256 [2024-11-20 11:37:26.252763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:53.256 [2024-11-20 11:37:26.252926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58153 ] 00:09:53.516 [2024-11-20 11:37:26.389031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.516 [2024-11-20 11:37:26.494681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.516 [2024-11-20 11:37:26.495139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.516 [2024-11-20 11:37:26.495239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.516 [2024-11-20 11:37:26.495242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.493 11:37:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.493 11:37:27 -- common/autotest_common.sh@862 -- # return 0 00:09:54.493 11:37:27 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58183 00:09:54.493 11:37:27 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:54.493 11:37:27 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58183 /var/tmp/spdk2.sock 00:09:54.493 11:37:27 -- common/autotest_common.sh@650 -- # local es=0 00:09:54.493 11:37:27 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58183 /var/tmp/spdk2.sock 00:09:54.493 11:37:27 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:54.493 11:37:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.493 11:37:27 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:54.493 11:37:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.493 11:37:27 -- common/autotest_common.sh@653 -- # waitforlisten 58183 /var/tmp/spdk2.sock 00:09:54.493 11:37:27 -- common/autotest_common.sh@829 -- # '[' -z 58183 ']' 00:09:54.493 11:37:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:54.493 11:37:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.493 11:37:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:54.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
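The -m arguments used throughout these runs are hex bitmasks over CPU cores: -m 0x1 is core 0, -m 0x7 (binary 111) is cores 0 through 2, which is why this target just reported three reactors, and -m 0x1c (binary 11100), used for the competing target next, is cores 2 through 4 and overlaps 0x7 only on core 2. A tiny helper to expand such a mask, purely for illustration:

    #!/usr/bin/env bash
    set -euo pipefail

    # Expand an SPDK-style core mask (e.g. 0x1c) into the core numbers it selects.
    mask_to_cores() {
        local mask=$(( $1 ))
        local core=0 out=""
        while (( mask > 0 )); do
            if (( mask & 1 )); then
                out+="$core "
            fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${out:-none}"
    }

    mask_to_cores 0x1    # -> 0
    mask_to_cores 0x7    # -> 0 1 2
    mask_to_cores 0x1c   # -> 2 3 4

With plain spdk_tgt each selected core gets its own lock file under /var/tmp, so 0x7 and 0x1c cannot both be claimed at once; the failed second start below demonstrates that, and the --disable-cpumask-locks variants are how the suite works around it.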
00:09:54.493 11:37:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.493 11:37:27 -- common/autotest_common.sh@10 -- # set +x 00:09:54.493 [2024-11-20 11:37:27.227502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:54.493 [2024-11-20 11:37:27.227998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58183 ] 00:09:54.493 [2024-11-20 11:37:27.356449] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58153 has claimed it. 00:09:54.493 [2024-11-20 11:37:27.356508] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:55.061 ERROR: process (pid: 58183) is no longer running 00:09:55.061 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58183) - No such process 00:09:55.061 11:37:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.061 11:37:27 -- common/autotest_common.sh@862 -- # return 1 00:09:55.061 11:37:27 -- common/autotest_common.sh@653 -- # es=1 00:09:55.061 11:37:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:55.061 11:37:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:55.061 11:37:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:55.061 11:37:27 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:55.061 11:37:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:55.061 11:37:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:55.061 11:37:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:55.061 11:37:27 -- event/cpu_locks.sh@141 -- # killprocess 58153 00:09:55.061 11:37:27 -- common/autotest_common.sh@936 -- # '[' -z 58153 ']' 00:09:55.061 11:37:27 -- common/autotest_common.sh@940 -- # kill -0 58153 00:09:55.061 11:37:27 -- common/autotest_common.sh@941 -- # uname 00:09:55.061 11:37:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:55.061 11:37:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58153 00:09:55.061 11:37:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:55.061 11:37:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:55.061 11:37:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58153' 00:09:55.061 killing process with pid 58153 00:09:55.061 11:37:27 -- common/autotest_common.sh@955 -- # kill 58153 00:09:55.061 11:37:27 -- common/autotest_common.sh@960 -- # wait 58153 00:09:55.321 00:09:55.321 real 0m2.107s 00:09:55.321 user 0m5.777s 00:09:55.321 sys 0m0.381s 00:09:55.321 11:37:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:55.321 11:37:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.321 ************************************ 00:09:55.321 END TEST locking_overlapped_coremask 00:09:55.321 ************************************ 00:09:55.321 11:37:28 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:55.321 11:37:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.321 11:37:28 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.321 11:37:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.321 ************************************ 00:09:55.321 START TEST locking_overlapped_coremask_via_rpc 00:09:55.321 ************************************ 00:09:55.321 11:37:28 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:09:55.321 11:37:28 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58229 00:09:55.321 11:37:28 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:55.321 11:37:28 -- event/cpu_locks.sh@149 -- # waitforlisten 58229 /var/tmp/spdk.sock 00:09:55.321 11:37:28 -- common/autotest_common.sh@829 -- # '[' -z 58229 ']' 00:09:55.321 11:37:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.321 11:37:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:55.321 11:37:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.321 11:37:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.321 11:37:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.581 [2024-11-20 11:37:28.413273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:55.581 [2024-11-20 11:37:28.413446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:09:55.581 [2024-11-20 11:37:28.552355] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:55.581 [2024-11-20 11:37:28.552512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.840 [2024-11-20 11:37:28.656913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:55.840 [2024-11-20 11:37:28.657346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.840 [2024-11-20 11:37:28.657434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.840 [2024-11-20 11:37:28.657435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.408 11:37:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.408 11:37:29 -- common/autotest_common.sh@862 -- # return 0 00:09:56.408 11:37:29 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58260 00:09:56.408 11:37:29 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:56.408 11:37:29 -- event/cpu_locks.sh@153 -- # waitforlisten 58260 /var/tmp/spdk2.sock 00:09:56.408 11:37:29 -- common/autotest_common.sh@829 -- # '[' -z 58260 ']' 00:09:56.408 11:37:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.408 11:37:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.408 11:37:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:56.408 11:37:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.408 11:37:29 -- common/autotest_common.sh@10 -- # set +x 00:09:56.408 [2024-11-20 11:37:29.413546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:56.408 [2024-11-20 11:37:29.413748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58260 ] 00:09:56.668 [2024-11-20 11:37:29.541107] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:56.668 [2024-11-20 11:37:29.541145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.928 [2024-11-20 11:37:29.757575] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:56.928 [2024-11-20 11:37:29.757921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.928 [2024-11-20 11:37:29.758031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:56.928 [2024-11-20 11:37:29.758025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.497 11:37:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.497 11:37:30 -- common/autotest_common.sh@862 -- # return 0 00:09:57.497 11:37:30 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:57.497 11:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.497 11:37:30 -- common/autotest_common.sh@10 -- # set +x 00:09:57.497 11:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.497 11:37:30 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:57.497 11:37:30 -- common/autotest_common.sh@650 -- # local es=0 00:09:57.497 11:37:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:57.497 11:37:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:57.497 11:37:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:57.497 11:37:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:57.497 11:37:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:57.497 11:37:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:57.497 11:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.497 11:37:30 -- common/autotest_common.sh@10 -- # set +x 00:09:57.497 [2024-11-20 11:37:30.337775] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58229 has claimed it. 
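The claim_cpu_cores error above is the direct result of the rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks call traced earlier: the secondary target (mask 0x1c) overlaps core 2 with the primary target pid 58229 (mask 0x7), whose locks were already enabled, so the claim is rejected and the JSON-RPC error echoed below is returned. As a rough sketch only, the same call issued by hand would look like this, assuming SPDK's bundled Python RPC client and the socket path used in this run:

    # ask the secondary spdk_tgt to claim its CPU cores; expected to fail
    # here because core 2 is already locked by the primary target
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks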
00:09:57.497 2024/11/20 11:37:30 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:09:57.497 request: 00:09:57.497 { 00:09:57.497 "method": "framework_enable_cpumask_locks", 00:09:57.497 "params": {} 00:09:57.497 } 00:09:57.497 Got JSON-RPC error response 00:09:57.497 GoRPCClient: error on JSON-RPC call 00:09:57.497 11:37:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:57.497 11:37:30 -- common/autotest_common.sh@653 -- # es=1 00:09:57.497 11:37:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:57.497 11:37:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:57.497 11:37:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:57.497 11:37:30 -- event/cpu_locks.sh@158 -- # waitforlisten 58229 /var/tmp/spdk.sock 00:09:57.497 11:37:30 -- common/autotest_common.sh@829 -- # '[' -z 58229 ']' 00:09:57.497 11:37:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.497 11:37:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.497 11:37:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.497 11:37:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.497 11:37:30 -- common/autotest_common.sh@10 -- # set +x 00:09:57.756 11:37:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.756 11:37:30 -- common/autotest_common.sh@862 -- # return 0 00:09:57.756 11:37:30 -- event/cpu_locks.sh@159 -- # waitforlisten 58260 /var/tmp/spdk2.sock 00:09:57.756 11:37:30 -- common/autotest_common.sh@829 -- # '[' -z 58260 ']' 00:09:57.756 11:37:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:57.756 11:37:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.756 11:37:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:57.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:57.756 11:37:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.756 11:37:30 -- common/autotest_common.sh@10 -- # set +x 00:09:58.015 11:37:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.015 11:37:30 -- common/autotest_common.sh@862 -- # return 0 00:09:58.015 11:37:30 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:58.015 11:37:30 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:58.015 11:37:30 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:58.015 11:37:30 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:58.015 00:09:58.015 real 0m2.453s 00:09:58.015 user 0m1.172s 00:09:58.015 sys 0m0.230s 00:09:58.015 11:37:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:58.015 11:37:30 -- common/autotest_common.sh@10 -- # set +x 00:09:58.015 ************************************ 00:09:58.015 END TEST locking_overlapped_coremask_via_rpc 00:09:58.015 ************************************ 00:09:58.015 11:37:30 -- event/cpu_locks.sh@174 -- # cleanup 00:09:58.015 11:37:30 -- event/cpu_locks.sh@15 -- # [[ -z 58229 ]] 00:09:58.015 11:37:30 -- event/cpu_locks.sh@15 -- # killprocess 58229 00:09:58.015 11:37:30 -- common/autotest_common.sh@936 -- # '[' -z 58229 ']' 00:09:58.015 11:37:30 -- common/autotest_common.sh@940 -- # kill -0 58229 00:09:58.015 11:37:30 -- common/autotest_common.sh@941 -- # uname 00:09:58.015 11:37:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:58.015 11:37:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58229 00:09:58.015 killing process with pid 58229 00:09:58.015 11:37:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:58.015 11:37:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:58.015 11:37:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58229' 00:09:58.015 11:37:30 -- common/autotest_common.sh@955 -- # kill 58229 00:09:58.015 11:37:30 -- common/autotest_common.sh@960 -- # wait 58229 00:09:58.332 11:37:31 -- event/cpu_locks.sh@16 -- # [[ -z 58260 ]] 00:09:58.332 11:37:31 -- event/cpu_locks.sh@16 -- # killprocess 58260 00:09:58.332 11:37:31 -- common/autotest_common.sh@936 -- # '[' -z 58260 ']' 00:09:58.332 11:37:31 -- common/autotest_common.sh@940 -- # kill -0 58260 00:09:58.332 11:37:31 -- common/autotest_common.sh@941 -- # uname 00:09:58.332 11:37:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:58.332 11:37:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58260 00:09:58.591 killing process with pid 58260 00:09:58.591 11:37:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:58.591 11:37:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:58.591 11:37:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58260' 00:09:58.591 11:37:31 -- common/autotest_common.sh@955 -- # kill 58260 00:09:58.591 11:37:31 -- common/autotest_common.sh@960 -- # wait 58260 00:09:58.850 11:37:31 -- event/cpu_locks.sh@18 -- # rm -f 00:09:58.850 11:37:31 -- event/cpu_locks.sh@1 -- # cleanup 00:09:58.850 11:37:31 -- event/cpu_locks.sh@15 -- # [[ -z 58229 ]] 00:09:58.850 11:37:31 -- event/cpu_locks.sh@15 -- # killprocess 58229 00:09:58.850 11:37:31 -- 
common/autotest_common.sh@936 -- # '[' -z 58229 ']' 00:09:58.850 11:37:31 -- common/autotest_common.sh@940 -- # kill -0 58229 00:09:58.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58229) - No such process 00:09:58.850 Process with pid 58229 is not found 00:09:58.850 11:37:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58229 is not found' 00:09:58.850 11:37:31 -- event/cpu_locks.sh@16 -- # [[ -z 58260 ]] 00:09:58.850 11:37:31 -- event/cpu_locks.sh@16 -- # killprocess 58260 00:09:58.850 11:37:31 -- common/autotest_common.sh@936 -- # '[' -z 58260 ']' 00:09:58.850 11:37:31 -- common/autotest_common.sh@940 -- # kill -0 58260 00:09:58.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58260) - No such process 00:09:58.850 Process with pid 58260 is not found 00:09:58.850 11:37:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58260 is not found' 00:09:58.850 11:37:31 -- event/cpu_locks.sh@18 -- # rm -f 00:09:58.850 ************************************ 00:09:58.850 END TEST cpu_locks 00:09:58.850 ************************************ 00:09:58.850 00:09:58.850 real 0m19.498s 00:09:58.850 user 0m33.397s 00:09:58.850 sys 0m5.215s 00:09:58.850 11:37:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:58.850 11:37:31 -- common/autotest_common.sh@10 -- # set +x 00:09:58.850 ************************************ 00:09:58.850 END TEST event 00:09:58.850 ************************************ 00:09:58.850 00:09:58.850 real 0m48.698s 00:09:58.850 user 1m34.852s 00:09:58.850 sys 0m9.196s 00:09:58.850 11:37:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:58.850 11:37:31 -- common/autotest_common.sh@10 -- # set +x 00:09:58.850 11:37:31 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:58.850 11:37:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:58.850 11:37:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.850 11:37:31 -- common/autotest_common.sh@10 -- # set +x 00:09:58.850 ************************************ 00:09:58.850 START TEST thread 00:09:58.850 ************************************ 00:09:58.850 11:37:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:59.109 * Looking for test storage... 
00:09:59.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:59.109 11:37:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:59.110 11:37:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:59.110 11:37:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:59.110 11:37:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:59.110 11:37:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:59.110 11:37:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:59.110 11:37:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:59.110 11:37:31 -- scripts/common.sh@335 -- # IFS=.-: 00:09:59.110 11:37:31 -- scripts/common.sh@335 -- # read -ra ver1 00:09:59.110 11:37:31 -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.110 11:37:31 -- scripts/common.sh@336 -- # read -ra ver2 00:09:59.110 11:37:31 -- scripts/common.sh@337 -- # local 'op=<' 00:09:59.110 11:37:31 -- scripts/common.sh@339 -- # ver1_l=2 00:09:59.110 11:37:31 -- scripts/common.sh@340 -- # ver2_l=1 00:09:59.110 11:37:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:59.110 11:37:31 -- scripts/common.sh@343 -- # case "$op" in 00:09:59.110 11:37:31 -- scripts/common.sh@344 -- # : 1 00:09:59.110 11:37:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:59.110 11:37:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.110 11:37:32 -- scripts/common.sh@364 -- # decimal 1 00:09:59.110 11:37:32 -- scripts/common.sh@352 -- # local d=1 00:09:59.110 11:37:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.110 11:37:32 -- scripts/common.sh@354 -- # echo 1 00:09:59.110 11:37:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:59.110 11:37:32 -- scripts/common.sh@365 -- # decimal 2 00:09:59.110 11:37:32 -- scripts/common.sh@352 -- # local d=2 00:09:59.110 11:37:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.110 11:37:32 -- scripts/common.sh@354 -- # echo 2 00:09:59.110 11:37:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:59.110 11:37:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:59.110 11:37:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:59.110 11:37:32 -- scripts/common.sh@367 -- # return 0 00:09:59.110 11:37:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.110 11:37:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.110 --rc genhtml_branch_coverage=1 00:09:59.110 --rc genhtml_function_coverage=1 00:09:59.110 --rc genhtml_legend=1 00:09:59.110 --rc geninfo_all_blocks=1 00:09:59.110 --rc geninfo_unexecuted_blocks=1 00:09:59.110 00:09:59.110 ' 00:09:59.110 11:37:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.110 --rc genhtml_branch_coverage=1 00:09:59.110 --rc genhtml_function_coverage=1 00:09:59.110 --rc genhtml_legend=1 00:09:59.110 --rc geninfo_all_blocks=1 00:09:59.110 --rc geninfo_unexecuted_blocks=1 00:09:59.110 00:09:59.110 ' 00:09:59.110 11:37:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.110 --rc genhtml_branch_coverage=1 00:09:59.110 --rc genhtml_function_coverage=1 00:09:59.110 --rc genhtml_legend=1 00:09:59.110 --rc geninfo_all_blocks=1 00:09:59.110 --rc geninfo_unexecuted_blocks=1 00:09:59.110 00:09:59.110 ' 00:09:59.110 11:37:32 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.110 --rc genhtml_branch_coverage=1 00:09:59.110 --rc genhtml_function_coverage=1 00:09:59.110 --rc genhtml_legend=1 00:09:59.110 --rc geninfo_all_blocks=1 00:09:59.110 --rc geninfo_unexecuted_blocks=1 00:09:59.110 00:09:59.110 ' 00:09:59.110 11:37:32 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:59.110 11:37:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:59.110 11:37:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.110 11:37:32 -- common/autotest_common.sh@10 -- # set +x 00:09:59.110 ************************************ 00:09:59.110 START TEST thread_poller_perf 00:09:59.110 ************************************ 00:09:59.110 11:37:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:59.110 [2024-11-20 11:37:32.055644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:59.110 [2024-11-20 11:37:32.055849] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58414 ] 00:09:59.370 [2024-11-20 11:37:32.191204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.370 [2024-11-20 11:37:32.291439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.370 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:00.745 [2024-11-20T11:37:33.788Z] ====================================== 00:10:00.745 [2024-11-20T11:37:33.788Z] busy:2297003802 (cyc) 00:10:00.745 [2024-11-20T11:37:33.788Z] total_run_count: 365000 00:10:00.745 [2024-11-20T11:37:33.788Z] tsc_hz: 2290000000 (cyc) 00:10:00.745 [2024-11-20T11:37:33.788Z] ====================================== 00:10:00.745 [2024-11-20T11:37:33.788Z] poller_cost: 6293 (cyc), 2748 (nsec) 00:10:00.745 00:10:00.745 real 0m1.369s 00:10:00.745 user 0m1.207s 00:10:00.745 sys 0m0.055s 00:10:00.745 11:37:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:00.745 11:37:33 -- common/autotest_common.sh@10 -- # set +x 00:10:00.745 ************************************ 00:10:00.745 END TEST thread_poller_perf 00:10:00.745 ************************************ 00:10:00.745 11:37:33 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:00.745 11:37:33 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:00.745 11:37:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.745 11:37:33 -- common/autotest_common.sh@10 -- # set +x 00:10:00.745 ************************************ 00:10:00.745 START TEST thread_poller_perf 00:10:00.745 ************************************ 00:10:00.745 11:37:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:00.745 [2024-11-20 11:37:33.494048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:00.745 [2024-11-20 11:37:33.494248] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58449 ] 00:10:00.745 [2024-11-20 11:37:33.636689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.745 [2024-11-20 11:37:33.739544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.745 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:02.245 [2024-11-20T11:37:35.288Z] ====================================== 00:10:02.245 [2024-11-20T11:37:35.288Z] busy:2292439606 (cyc) 00:10:02.245 [2024-11-20T11:37:35.288Z] total_run_count: 4894000 00:10:02.245 [2024-11-20T11:37:35.288Z] tsc_hz: 2290000000 (cyc) 00:10:02.245 [2024-11-20T11:37:35.288Z] ====================================== 00:10:02.245 [2024-11-20T11:37:35.288Z] poller_cost: 468 (cyc), 204 (nsec) 00:10:02.245 00:10:02.245 real 0m1.377s 00:10:02.245 user 0m1.215s 00:10:02.245 sys 0m0.054s 00:10:02.245 11:37:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:02.245 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:10:02.245 ************************************ 00:10:02.245 END TEST thread_poller_perf 00:10:02.245 ************************************ 00:10:02.245 11:37:34 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:02.245 00:10:02.245 real 0m3.100s 00:10:02.245 user 0m2.589s 00:10:02.245 sys 0m0.306s 00:10:02.245 ************************************ 00:10:02.245 END TEST thread 00:10:02.245 ************************************ 00:10:02.245 11:37:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:02.245 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:10:02.245 11:37:34 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:02.245 11:37:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:02.246 11:37:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:02.246 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 ************************************ 00:10:02.246 START TEST accel 00:10:02.246 ************************************ 00:10:02.246 11:37:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:02.246 * Looking for test storage... 
00:10:02.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:02.246 11:37:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:02.246 11:37:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:02.246 11:37:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:02.246 11:37:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:02.246 11:37:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:02.246 11:37:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:02.246 11:37:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:02.246 11:37:35 -- scripts/common.sh@335 -- # IFS=.-: 00:10:02.246 11:37:35 -- scripts/common.sh@335 -- # read -ra ver1 00:10:02.246 11:37:35 -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.246 11:37:35 -- scripts/common.sh@336 -- # read -ra ver2 00:10:02.246 11:37:35 -- scripts/common.sh@337 -- # local 'op=<' 00:10:02.246 11:37:35 -- scripts/common.sh@339 -- # ver1_l=2 00:10:02.246 11:37:35 -- scripts/common.sh@340 -- # ver2_l=1 00:10:02.246 11:37:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:02.246 11:37:35 -- scripts/common.sh@343 -- # case "$op" in 00:10:02.246 11:37:35 -- scripts/common.sh@344 -- # : 1 00:10:02.246 11:37:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:02.246 11:37:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.246 11:37:35 -- scripts/common.sh@364 -- # decimal 1 00:10:02.246 11:37:35 -- scripts/common.sh@352 -- # local d=1 00:10:02.246 11:37:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.246 11:37:35 -- scripts/common.sh@354 -- # echo 1 00:10:02.246 11:37:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:02.246 11:37:35 -- scripts/common.sh@365 -- # decimal 2 00:10:02.246 11:37:35 -- scripts/common.sh@352 -- # local d=2 00:10:02.246 11:37:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.246 11:37:35 -- scripts/common.sh@354 -- # echo 2 00:10:02.246 11:37:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:02.246 11:37:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:02.246 11:37:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:02.246 11:37:35 -- scripts/common.sh@367 -- # return 0 00:10:02.246 11:37:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.246 11:37:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:02.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.246 --rc genhtml_branch_coverage=1 00:10:02.246 --rc genhtml_function_coverage=1 00:10:02.246 --rc genhtml_legend=1 00:10:02.246 --rc geninfo_all_blocks=1 00:10:02.246 --rc geninfo_unexecuted_blocks=1 00:10:02.246 00:10:02.246 ' 00:10:02.246 11:37:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:02.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.246 --rc genhtml_branch_coverage=1 00:10:02.246 --rc genhtml_function_coverage=1 00:10:02.246 --rc genhtml_legend=1 00:10:02.246 --rc geninfo_all_blocks=1 00:10:02.246 --rc geninfo_unexecuted_blocks=1 00:10:02.246 00:10:02.246 ' 00:10:02.246 11:37:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:02.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.246 --rc genhtml_branch_coverage=1 00:10:02.246 --rc genhtml_function_coverage=1 00:10:02.246 --rc genhtml_legend=1 00:10:02.246 --rc geninfo_all_blocks=1 00:10:02.246 --rc geninfo_unexecuted_blocks=1 00:10:02.246 00:10:02.246 ' 00:10:02.246 11:37:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:02.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.246 --rc genhtml_branch_coverage=1 00:10:02.246 --rc genhtml_function_coverage=1 00:10:02.246 --rc genhtml_legend=1 00:10:02.246 --rc geninfo_all_blocks=1 00:10:02.246 --rc geninfo_unexecuted_blocks=1 00:10:02.246 00:10:02.246 ' 00:10:02.246 11:37:35 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:02.246 11:37:35 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:02.246 11:37:35 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:02.246 11:37:35 -- accel/accel.sh@59 -- # spdk_tgt_pid=58531 00:10:02.246 11:37:35 -- accel/accel.sh@60 -- # waitforlisten 58531 00:10:02.246 11:37:35 -- accel/accel.sh@58 -- # build_accel_config 00:10:02.246 11:37:35 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:02.246 11:37:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.246 11:37:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.246 11:37:35 -- common/autotest_common.sh@829 -- # '[' -z 58531 ']' 00:10:02.246 11:37:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.246 11:37:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.246 11:37:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.246 11:37:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.246 11:37:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.246 11:37:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.246 11:37:35 -- accel/accel.sh@42 -- # jq -r . 00:10:02.246 11:37:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.246 11:37:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.246 11:37:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 [2024-11-20 11:37:35.236350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:02.246 [2024-11-20 11:37:35.236525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58531 ] 00:10:02.536 [2024-11-20 11:37:35.360489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.536 [2024-11-20 11:37:35.462422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:02.536 [2024-11-20 11:37:35.462662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.472 11:37:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.472 11:37:36 -- common/autotest_common.sh@862 -- # return 0 00:10:03.472 11:37:36 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:03.472 11:37:36 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:03.472 11:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.472 11:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.472 11:37:36 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:03.473 11:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # IFS== 00:10:03.473 11:37:36 -- accel/accel.sh@64 -- # read -r opc module 00:10:03.473 11:37:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:03.473 11:37:36 -- accel/accel.sh@67 -- # killprocess 58531 00:10:03.473 11:37:36 -- common/autotest_common.sh@936 -- # '[' -z 58531 ']' 00:10:03.473 11:37:36 -- common/autotest_common.sh@940 -- # kill -0 58531 00:10:03.473 11:37:36 -- common/autotest_common.sh@941 -- # uname 00:10:03.473 11:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.473 11:37:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58531 00:10:03.473 killing process with pid 58531 00:10:03.473 11:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:03.473 11:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:03.473 11:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58531' 00:10:03.473 11:37:36 -- common/autotest_common.sh@955 -- # kill 58531 00:10:03.473 11:37:36 -- common/autotest_common.sh@960 -- # wait 58531 00:10:03.732 11:37:36 -- accel/accel.sh@68 -- # trap - ERR 00:10:03.732 11:37:36 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:03.732 11:37:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:03.732 11:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.732 11:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.732 11:37:36 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:10:03.732 11:37:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:03.732 11:37:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.732 11:37:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.732 11:37:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.732 11:37:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.732 11:37:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.732 11:37:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.732 11:37:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.732 11:37:36 -- accel/accel.sh@42 -- # jq -r . 
00:10:03.732 11:37:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:03.733 11:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.733 11:37:36 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:03.733 11:37:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:03.733 11:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.733 11:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.733 ************************************ 00:10:03.733 START TEST accel_missing_filename 00:10:03.733 ************************************ 00:10:03.733 11:37:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:10:03.733 11:37:36 -- common/autotest_common.sh@650 -- # local es=0 00:10:03.733 11:37:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:03.733 11:37:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:03.733 11:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.733 11:37:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:03.733 11:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.733 11:37:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:10:03.733 11:37:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:03.733 11:37:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.733 11:37:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.733 11:37:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.733 11:37:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.733 11:37:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.733 11:37:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.733 11:37:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.733 11:37:36 -- accel/accel.sh@42 -- # jq -r . 00:10:03.733 [2024-11-20 11:37:36.744419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:03.733 [2024-11-20 11:37:36.744495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58606 ] 00:10:03.993 [2024-11-20 11:37:36.871335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.993 [2024-11-20 11:37:37.010228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.252 [2024-11-20 11:37:37.056070] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:04.252 [2024-11-20 11:37:37.117720] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:04.252 A filename is required. 
00:10:04.252 11:37:37 -- common/autotest_common.sh@653 -- # es=234 00:10:04.252 11:37:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.252 11:37:37 -- common/autotest_common.sh@662 -- # es=106 00:10:04.252 ************************************ 00:10:04.252 END TEST accel_missing_filename 00:10:04.252 ************************************ 00:10:04.252 11:37:37 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:04.252 11:37:37 -- common/autotest_common.sh@670 -- # es=1 00:10:04.252 11:37:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.252 00:10:04.252 real 0m0.501s 00:10:04.252 user 0m0.338s 00:10:04.252 sys 0m0.107s 00:10:04.252 11:37:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.252 11:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:04.252 11:37:37 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:04.252 11:37:37 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:04.252 11:37:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.252 11:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:04.252 ************************************ 00:10:04.252 START TEST accel_compress_verify 00:10:04.252 ************************************ 00:10:04.252 11:37:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:04.511 11:37:37 -- common/autotest_common.sh@650 -- # local es=0 00:10:04.511 11:37:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:04.511 11:37:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:04.511 11:37:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.511 11:37:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:04.511 11:37:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.511 11:37:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:04.511 11:37:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:04.511 11:37:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.511 11:37:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.511 11:37:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.511 11:37:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.511 11:37:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.511 11:37:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.511 11:37:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.511 11:37:37 -- accel/accel.sh@42 -- # jq -r . 00:10:04.511 [2024-11-20 11:37:37.330231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:04.511 [2024-11-20 11:37:37.330330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58625 ] 00:10:04.511 [2024-11-20 11:37:37.469965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.770 [2024-11-20 11:37:37.572312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.770 [2024-11-20 11:37:37.616530] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:04.770 [2024-11-20 11:37:37.678034] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:04.770 00:10:04.770 Compression does not support the verify option, aborting. 00:10:04.770 11:37:37 -- common/autotest_common.sh@653 -- # es=161 00:10:04.770 11:37:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.770 11:37:37 -- common/autotest_common.sh@662 -- # es=33 00:10:04.770 11:37:37 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:04.770 11:37:37 -- common/autotest_common.sh@670 -- # es=1 00:10:04.770 11:37:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.770 00:10:04.770 real 0m0.494s 00:10:04.770 user 0m0.332s 00:10:04.770 sys 0m0.101s 00:10:04.770 11:37:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.770 11:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:04.770 ************************************ 00:10:04.770 END TEST accel_compress_verify 00:10:04.770 ************************************ 00:10:05.029 11:37:37 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:05.029 11:37:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:05.029 11:37:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.029 11:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:05.029 ************************************ 00:10:05.029 START TEST accel_wrong_workload 00:10:05.029 ************************************ 00:10:05.029 11:37:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:10:05.029 11:37:37 -- common/autotest_common.sh@650 -- # local es=0 00:10:05.029 11:37:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:05.029 11:37:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:05.029 11:37:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.029 11:37:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:05.029 11:37:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.029 11:37:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:10:05.029 11:37:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:05.029 11:37:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:05.029 11:37:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:05.029 11:37:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.029 11:37:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.029 11:37:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:05.029 11:37:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:05.029 11:37:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:05.029 11:37:37 -- accel/accel.sh@42 -- # jq -r . 
00:10:05.029 Unsupported workload type: foobar 00:10:05.029 [2024-11-20 11:37:37.863334] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:05.029 accel_perf options: 00:10:05.029 [-h help message] 00:10:05.029 [-q queue depth per core] 00:10:05.029 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:05.029 [-T number of threads per core 00:10:05.029 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:05.029 [-t time in seconds] 00:10:05.029 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:05.029 [ dif_verify, , dif_generate, dif_generate_copy 00:10:05.029 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:05.029 [-l for compress/decompress workloads, name of uncompressed input file 00:10:05.029 [-S for crc32c workload, use this seed value (default 0) 00:10:05.029 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:05.029 [-f for fill workload, use this BYTE value (default 255) 00:10:05.029 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:05.029 [-y verify result if this switch is on] 00:10:05.029 [-a tasks to allocate per core (default: same value as -q)] 00:10:05.029 Can be used to spread operations across a wider range of memory. 00:10:05.029 11:37:37 -- common/autotest_common.sh@653 -- # es=1 00:10:05.029 11:37:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:05.029 11:37:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:05.029 11:37:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:05.029 00:10:05.029 real 0m0.032s 00:10:05.029 user 0m0.019s 00:10:05.029 sys 0m0.013s 00:10:05.029 11:37:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:05.029 11:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:05.029 ************************************ 00:10:05.030 END TEST accel_wrong_workload 00:10:05.030 ************************************ 00:10:05.030 11:37:37 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:05.030 11:37:37 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:05.030 11:37:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.030 11:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:05.030 ************************************ 00:10:05.030 START TEST accel_negative_buffers 00:10:05.030 ************************************ 00:10:05.030 11:37:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:05.030 11:37:37 -- common/autotest_common.sh@650 -- # local es=0 00:10:05.030 11:37:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:05.030 11:37:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:05.030 11:37:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.030 11:37:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:05.030 11:37:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.030 11:37:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:10:05.030 11:37:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:05.030 11:37:37 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:05.030 11:37:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:05.030 11:37:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.030 11:37:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.030 11:37:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:05.030 11:37:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:05.030 11:37:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:05.030 11:37:37 -- accel/accel.sh@42 -- # jq -r . 00:10:05.030 -x option must be non-negative. 00:10:05.030 [2024-11-20 11:37:37.955810] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:05.030 accel_perf options: 00:10:05.030 [-h help message] 00:10:05.030 [-q queue depth per core] 00:10:05.030 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:05.030 [-T number of threads per core 00:10:05.030 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:05.030 [-t time in seconds] 00:10:05.030 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:05.030 [ dif_verify, , dif_generate, dif_generate_copy 00:10:05.030 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:05.030 [-l for compress/decompress workloads, name of uncompressed input file 00:10:05.030 [-S for crc32c workload, use this seed value (default 0) 00:10:05.030 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:05.030 [-f for fill workload, use this BYTE value (default 255) 00:10:05.030 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:05.030 [-y verify result if this switch is on] 00:10:05.030 [-a tasks to allocate per core (default: same value as -q)] 00:10:05.030 Can be used to spread operations across a wider range of memory. 
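The option listing above is accel_perf's own usage text, printed here because the negative-buffers test passes -x -1; it documents the knobs the surrounding accel tests exercise (-w workload, -l input file for compress, -S seed, -y verify, and so on). A minimal valid invocation of the kind the accel_crc32c test below runs, using the binary path from this run and shown only as a sketch, would be roughly:

    # crc32c workload, seed 32, verify results, 1-second run on core 0x1
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y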
00:10:05.030 11:37:37 -- common/autotest_common.sh@653 -- # es=1 00:10:05.030 11:37:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:05.030 11:37:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:05.030 11:37:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:05.030 00:10:05.030 real 0m0.041s 00:10:05.030 user 0m0.023s 00:10:05.030 sys 0m0.017s 00:10:05.030 11:37:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:05.030 11:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:05.030 ************************************ 00:10:05.030 END TEST accel_negative_buffers 00:10:05.030 ************************************ 00:10:05.030 11:37:38 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:05.030 11:37:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:05.030 11:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.030 11:37:38 -- common/autotest_common.sh@10 -- # set +x 00:10:05.030 ************************************ 00:10:05.030 START TEST accel_crc32c 00:10:05.030 ************************************ 00:10:05.030 11:37:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:05.030 11:37:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:05.030 11:37:38 -- accel/accel.sh@17 -- # local accel_module 00:10:05.030 11:37:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:05.030 11:37:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:05.030 11:37:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:05.030 11:37:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:05.030 11:37:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.030 11:37:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.030 11:37:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:05.030 11:37:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:05.030 11:37:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:05.030 11:37:38 -- accel/accel.sh@42 -- # jq -r . 00:10:05.030 [2024-11-20 11:37:38.060613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:05.030 [2024-11-20 11:37:38.060743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58689 ] 00:10:05.288 [2024-11-20 11:37:38.186920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.288 [2024-11-20 11:37:38.310733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.689 11:37:39 -- accel/accel.sh@18 -- # out=' 00:10:06.689 SPDK Configuration: 00:10:06.689 Core mask: 0x1 00:10:06.689 00:10:06.689 Accel Perf Configuration: 00:10:06.689 Workload Type: crc32c 00:10:06.689 CRC-32C seed: 32 00:10:06.689 Transfer size: 4096 bytes 00:10:06.689 Vector count 1 00:10:06.689 Module: software 00:10:06.689 Queue depth: 32 00:10:06.689 Allocate depth: 32 00:10:06.689 # threads/core: 1 00:10:06.689 Run time: 1 seconds 00:10:06.689 Verify: Yes 00:10:06.689 00:10:06.689 Running for 1 seconds... 
00:10:06.689 00:10:06.689 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:06.689 ------------------------------------------------------------------------------------ 00:10:06.689 0,0 463328/s 1809 MiB/s 0 0 00:10:06.689 ==================================================================================== 00:10:06.689 Total 463328/s 1809 MiB/s 0 0' 00:10:06.689 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.689 11:37:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:06.689 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.689 11:37:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:06.689 11:37:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:06.689 11:37:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.689 11:37:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.689 11:37:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.689 11:37:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.689 11:37:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.689 11:37:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.689 11:37:39 -- accel/accel.sh@42 -- # jq -r . 00:10:06.689 [2024-11-20 11:37:39.554890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.689 [2024-11-20 11:37:39.555373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58707 ] 00:10:06.689 [2024-11-20 11:37:39.694742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.990 [2024-11-20 11:37:39.798957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val= 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val= 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=0x1 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val= 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val= 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=crc32c 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=32 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val= 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=software 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@23 -- # accel_module=software 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=32 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=32 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=1 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val=Yes 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val= 00:10:06.990 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.990 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:06.990 11:37:39 -- accel/accel.sh@21 -- # val= 00:10:06.991 11:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.991 11:37:39 -- accel/accel.sh@20 -- # IFS=: 00:10:06.991 11:37:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.373 11:37:41 -- accel/accel.sh@21 -- # val= 00:10:08.373 11:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # IFS=: 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # read -r var val 00:10:08.373 11:37:41 -- accel/accel.sh@21 -- # val= 00:10:08.373 11:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # IFS=: 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # read -r var val 00:10:08.373 11:37:41 -- accel/accel.sh@21 -- # val= 00:10:08.373 11:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # IFS=: 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # read -r var val 00:10:08.373 11:37:41 -- accel/accel.sh@21 -- # val= 00:10:08.373 11:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # IFS=: 00:10:08.373 11:37:41 -- accel/accel.sh@20 -- # read -r var val 00:10:08.373 11:37:41 -- accel/accel.sh@21 -- # val= 00:10:08.374 11:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.374 11:37:41 -- accel/accel.sh@20 -- # IFS=: 00:10:08.374 11:37:41 -- 
accel/accel.sh@20 -- # read -r var val 00:10:08.374 11:37:41 -- accel/accel.sh@21 -- # val= 00:10:08.374 11:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.374 11:37:41 -- accel/accel.sh@20 -- # IFS=: 00:10:08.374 11:37:41 -- accel/accel.sh@20 -- # read -r var val 00:10:08.374 11:37:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:08.374 11:37:41 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:08.374 11:37:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.374 00:10:08.374 real 0m2.987s 00:10:08.374 user 0m2.577s 00:10:08.374 sys 0m0.213s 00:10:08.374 11:37:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:08.374 11:37:41 -- common/autotest_common.sh@10 -- # set +x 00:10:08.374 ************************************ 00:10:08.374 END TEST accel_crc32c 00:10:08.374 ************************************ 00:10:08.374 11:37:41 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:08.374 11:37:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:08.374 11:37:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.374 11:37:41 -- common/autotest_common.sh@10 -- # set +x 00:10:08.374 ************************************ 00:10:08.374 START TEST accel_crc32c_C2 00:10:08.374 ************************************ 00:10:08.374 11:37:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:08.374 11:37:41 -- accel/accel.sh@16 -- # local accel_opc 00:10:08.374 11:37:41 -- accel/accel.sh@17 -- # local accel_module 00:10:08.374 11:37:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:08.374 11:37:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:08.374 11:37:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.374 11:37:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.374 11:37:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.374 11:37:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.374 11:37:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.374 11:37:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.374 11:37:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.374 11:37:41 -- accel/accel.sh@42 -- # jq -r . 00:10:08.374 [2024-11-20 11:37:41.099596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:08.374 [2024-11-20 11:37:41.099702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58743 ] 00:10:08.374 [2024-11-20 11:37:41.225076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.374 [2024-11-20 11:37:41.308711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.760 11:37:42 -- accel/accel.sh@18 -- # out=' 00:10:09.760 SPDK Configuration: 00:10:09.760 Core mask: 0x1 00:10:09.760 00:10:09.760 Accel Perf Configuration: 00:10:09.760 Workload Type: crc32c 00:10:09.760 CRC-32C seed: 0 00:10:09.760 Transfer size: 4096 bytes 00:10:09.760 Vector count 2 00:10:09.760 Module: software 00:10:09.760 Queue depth: 32 00:10:09.760 Allocate depth: 32 00:10:09.760 # threads/core: 1 00:10:09.760 Run time: 1 seconds 00:10:09.760 Verify: Yes 00:10:09.760 00:10:09.760 Running for 1 seconds... 
00:10:09.760 00:10:09.760 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:09.760 ------------------------------------------------------------------------------------ 00:10:09.760 0,0 419392/s 3276 MiB/s 0 0 00:10:09.760 ==================================================================================== 00:10:09.761 Total 419392/s 1638 MiB/s 0 0' 00:10:09.761 11:37:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:09.761 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:09.761 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:09.761 11:37:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:09.761 11:37:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.761 11:37:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.761 11:37:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.761 11:37:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.761 11:37:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.761 11:37:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.761 11:37:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.761 11:37:42 -- accel/accel.sh@42 -- # jq -r . 00:10:09.761 [2024-11-20 11:37:42.530909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:09.761 [2024-11-20 11:37:42.530977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58757 ] 00:10:09.761 [2024-11-20 11:37:42.654266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.761 [2024-11-20 11:37:42.757863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.019 11:37:42 -- accel/accel.sh@21 -- # val= 00:10:10.019 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.019 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.019 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.019 11:37:42 -- accel/accel.sh@21 -- # val= 00:10:10.019 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.019 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.019 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=0x1 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val= 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val= 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=crc32c 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=0 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val= 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=software 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@23 -- # accel_module=software 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=32 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=32 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=1 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val=Yes 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val= 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.020 11:37:42 -- accel/accel.sh@21 -- # val= 00:10:10.020 11:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # IFS=: 00:10:10.020 11:37:42 -- accel/accel.sh@20 -- # read -r var val 00:10:10.955 11:37:43 -- accel/accel.sh@21 -- # val= 00:10:10.955 11:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # IFS=: 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # read -r var val 00:10:10.955 11:37:43 -- accel/accel.sh@21 -- # val= 00:10:10.955 11:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # IFS=: 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # read -r var val 00:10:10.955 11:37:43 -- accel/accel.sh@21 -- # val= 00:10:10.955 11:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # IFS=: 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # read -r var val 00:10:10.955 11:37:43 -- accel/accel.sh@21 -- # val= 00:10:10.955 11:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # IFS=: 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # read -r var val 00:10:10.955 11:37:43 -- accel/accel.sh@21 -- # val= 00:10:10.955 11:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # IFS=: 00:10:10.955 11:37:43 -- 
accel/accel.sh@20 -- # read -r var val 00:10:10.955 11:37:43 -- accel/accel.sh@21 -- # val= 00:10:10.955 11:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # IFS=: 00:10:10.955 11:37:43 -- accel/accel.sh@20 -- # read -r var val 00:10:10.955 11:37:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:10.955 11:37:43 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:10.955 11:37:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.955 00:10:10.955 real 0m2.905s 00:10:10.955 user 0m2.530s 00:10:10.955 sys 0m0.182s 00:10:10.955 11:37:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:10.955 11:37:43 -- common/autotest_common.sh@10 -- # set +x 00:10:10.955 ************************************ 00:10:10.955 END TEST accel_crc32c_C2 00:10:10.955 ************************************ 00:10:11.214 11:37:44 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:11.214 11:37:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:11.214 11:37:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.214 11:37:44 -- common/autotest_common.sh@10 -- # set +x 00:10:11.214 ************************************ 00:10:11.214 START TEST accel_copy 00:10:11.214 ************************************ 00:10:11.214 11:37:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:10:11.214 11:37:44 -- accel/accel.sh@16 -- # local accel_opc 00:10:11.214 11:37:44 -- accel/accel.sh@17 -- # local accel_module 00:10:11.214 11:37:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:11.214 11:37:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:11.214 11:37:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.214 11:37:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.214 11:37:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.214 11:37:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.214 11:37:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.214 11:37:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.214 11:37:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.214 11:37:44 -- accel/accel.sh@42 -- # jq -r . 00:10:11.214 [2024-11-20 11:37:44.069486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:11.214 [2024-11-20 11:37:44.069574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58797 ] 00:10:11.214 [2024-11-20 11:37:44.196436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.473 [2024-11-20 11:37:44.286878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.851 11:37:45 -- accel/accel.sh@18 -- # out=' 00:10:12.851 SPDK Configuration: 00:10:12.851 Core mask: 0x1 00:10:12.851 00:10:12.851 Accel Perf Configuration: 00:10:12.851 Workload Type: copy 00:10:12.851 Transfer size: 4096 bytes 00:10:12.851 Vector count 1 00:10:12.851 Module: software 00:10:12.851 Queue depth: 32 00:10:12.851 Allocate depth: 32 00:10:12.851 # threads/core: 1 00:10:12.851 Run time: 1 seconds 00:10:12.851 Verify: Yes 00:10:12.851 00:10:12.851 Running for 1 seconds... 
00:10:12.851 00:10:12.851 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:12.851 ------------------------------------------------------------------------------------ 00:10:12.851 0,0 399424/s 1560 MiB/s 0 0 00:10:12.851 ==================================================================================== 00:10:12.851 Total 399424/s 1560 MiB/s 0 0' 00:10:12.851 11:37:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:12.851 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.851 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.851 11:37:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:12.851 11:37:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:12.851 11:37:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:12.851 11:37:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.851 11:37:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.851 11:37:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:12.851 11:37:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:12.851 11:37:45 -- accel/accel.sh@41 -- # local IFS=, 00:10:12.851 11:37:45 -- accel/accel.sh@42 -- # jq -r . 00:10:12.851 [2024-11-20 11:37:45.525675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:12.851 [2024-11-20 11:37:45.525757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:10:12.851 [2024-11-20 11:37:45.666108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.851 [2024-11-20 11:37:45.756648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.851 11:37:45 -- accel/accel.sh@21 -- # val= 00:10:12.851 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.851 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.851 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.851 11:37:45 -- accel/accel.sh@21 -- # val= 00:10:12.851 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.851 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.851 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.851 11:37:45 -- accel/accel.sh@21 -- # val=0x1 00:10:12.851 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.851 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val= 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val= 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val=copy 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- 
accel/accel.sh@21 -- # val= 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val=software 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@23 -- # accel_module=software 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val=32 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val=32 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val=1 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val=Yes 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val= 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.852 11:37:45 -- accel/accel.sh@21 -- # val= 00:10:12.852 11:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.852 11:37:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.271 11:37:46 -- accel/accel.sh@21 -- # val= 00:10:14.271 11:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # IFS=: 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # read -r var val 00:10:14.271 11:37:46 -- accel/accel.sh@21 -- # val= 00:10:14.271 11:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # IFS=: 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # read -r var val 00:10:14.271 11:37:46 -- accel/accel.sh@21 -- # val= 00:10:14.271 11:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # IFS=: 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # read -r var val 00:10:14.271 11:37:46 -- accel/accel.sh@21 -- # val= 00:10:14.271 11:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # IFS=: 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # read -r var val 00:10:14.271 11:37:46 -- accel/accel.sh@21 -- # val= 00:10:14.271 11:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # IFS=: 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # read -r var val 00:10:14.271 11:37:46 -- accel/accel.sh@21 -- # val= 00:10:14.271 11:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.271 11:37:46 -- accel/accel.sh@20 -- # IFS=: 00:10:14.271 11:37:46 -- 
accel/accel.sh@20 -- # read -r var val 00:10:14.271 11:37:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:14.271 11:37:46 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:14.271 11:37:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.271 00:10:14.271 real 0m2.936s 00:10:14.271 user 0m2.555s 00:10:14.271 sys 0m0.187s 00:10:14.271 11:37:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.271 11:37:46 -- common/autotest_common.sh@10 -- # set +x 00:10:14.271 ************************************ 00:10:14.271 END TEST accel_copy 00:10:14.271 ************************************ 00:10:14.271 11:37:47 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:14.271 11:37:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:14.271 11:37:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.271 11:37:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.271 ************************************ 00:10:14.271 START TEST accel_fill 00:10:14.271 ************************************ 00:10:14.271 11:37:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:14.271 11:37:47 -- accel/accel.sh@16 -- # local accel_opc 00:10:14.271 11:37:47 -- accel/accel.sh@17 -- # local accel_module 00:10:14.271 11:37:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:14.271 11:37:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:14.271 11:37:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.272 11:37:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.272 11:37:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.272 11:37:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.272 11:37:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.272 11:37:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.272 11:37:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.272 11:37:47 -- accel/accel.sh@42 -- # jq -r . 00:10:14.272 [2024-11-20 11:37:47.065407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:14.272 [2024-11-20 11:37:47.065512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58851 ] 00:10:14.272 [2024-11-20 11:37:47.203490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.272 [2024-11-20 11:37:47.306574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.652 11:37:48 -- accel/accel.sh@18 -- # out=' 00:10:15.652 SPDK Configuration: 00:10:15.652 Core mask: 0x1 00:10:15.652 00:10:15.652 Accel Perf Configuration: 00:10:15.652 Workload Type: fill 00:10:15.652 Fill pattern: 0x80 00:10:15.652 Transfer size: 4096 bytes 00:10:15.652 Vector count 1 00:10:15.652 Module: software 00:10:15.652 Queue depth: 64 00:10:15.652 Allocate depth: 64 00:10:15.652 # threads/core: 1 00:10:15.652 Run time: 1 seconds 00:10:15.652 Verify: Yes 00:10:15.652 00:10:15.652 Running for 1 seconds... 
00:10:15.652 00:10:15.652 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:15.652 ------------------------------------------------------------------------------------ 00:10:15.652 0,0 582912/s 2277 MiB/s 0 0 00:10:15.652 ==================================================================================== 00:10:15.652 Total 582912/s 2277 MiB/s 0 0' 00:10:15.652 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.652 11:37:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.652 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.652 11:37:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.652 11:37:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:15.652 11:37:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:15.652 11:37:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.652 11:37:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.652 11:37:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:15.652 11:37:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:15.652 11:37:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:15.652 11:37:48 -- accel/accel.sh@42 -- # jq -r . 00:10:15.652 [2024-11-20 11:37:48.546986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:15.652 [2024-11-20 11:37:48.547678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58865 ] 00:10:15.652 [2024-11-20 11:37:48.685793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.912 [2024-11-20 11:37:48.785022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val= 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val= 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=0x1 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val= 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val= 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=fill 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=0x80 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 
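Editor's note (added for clarity, not log output): the -f 128 passed to accel_test is the decimal form of the 0x80 fill pattern reported in the configuration summary just above, and the reported bandwidth follows directly from transfers/s times the 4096-byte transfer size. A quick shell check of both, using only numbers taken from that run:

    printf '0x%x\n' 128                          # 0x80, the fill pattern shown in the summary
    echo $(( 582912 * 4096 / 1024 / 1024 ))      # 2277 (MiB/s), matching the results row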
00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val= 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=software 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@23 -- # accel_module=software 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=64 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=64 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=1 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val=Yes 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.912 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.912 11:37:48 -- accel/accel.sh@21 -- # val= 00:10:15.912 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.913 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.913 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:15.913 11:37:48 -- accel/accel.sh@21 -- # val= 00:10:15.913 11:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.913 11:37:48 -- accel/accel.sh@20 -- # IFS=: 00:10:15.913 11:37:48 -- accel/accel.sh@20 -- # read -r var val 00:10:17.374 11:37:49 -- accel/accel.sh@21 -- # val= 00:10:17.374 11:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # IFS=: 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # read -r var val 00:10:17.374 11:37:49 -- accel/accel.sh@21 -- # val= 00:10:17.374 11:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # IFS=: 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # read -r var val 00:10:17.374 11:37:49 -- accel/accel.sh@21 -- # val= 00:10:17.374 11:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # IFS=: 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # read -r var val 00:10:17.374 11:37:49 -- accel/accel.sh@21 -- # val= 00:10:17.374 11:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # IFS=: 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # read -r var val 00:10:17.374 11:37:49 -- accel/accel.sh@21 -- # val= 00:10:17.374 11:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # IFS=: 
00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # read -r var val 00:10:17.374 11:37:49 -- accel/accel.sh@21 -- # val= 00:10:17.374 11:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # IFS=: 00:10:17.374 11:37:49 -- accel/accel.sh@20 -- # read -r var val 00:10:17.374 11:37:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:17.374 11:37:49 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:17.374 11:37:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:17.374 00:10:17.374 real 0m2.967s 00:10:17.374 user 0m2.577s 00:10:17.374 sys 0m0.197s 00:10:17.374 11:37:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.374 11:37:49 -- common/autotest_common.sh@10 -- # set +x 00:10:17.374 ************************************ 00:10:17.374 END TEST accel_fill 00:10:17.374 ************************************ 00:10:17.374 11:37:50 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:17.374 11:37:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:17.374 11:37:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.374 11:37:50 -- common/autotest_common.sh@10 -- # set +x 00:10:17.374 ************************************ 00:10:17.374 START TEST accel_copy_crc32c 00:10:17.374 ************************************ 00:10:17.374 11:37:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:10:17.374 11:37:50 -- accel/accel.sh@16 -- # local accel_opc 00:10:17.374 11:37:50 -- accel/accel.sh@17 -- # local accel_module 00:10:17.374 11:37:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:17.374 11:37:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:17.374 11:37:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:17.374 11:37:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.374 11:37:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.374 11:37:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.374 11:37:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.374 11:37:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.374 11:37:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.374 11:37:50 -- accel/accel.sh@42 -- # jq -r . 00:10:17.374 [2024-11-20 11:37:50.102060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:17.374 [2024-11-20 11:37:50.102207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58900 ] 00:10:17.374 [2024-11-20 11:37:50.237992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.374 [2024-11-20 11:37:50.338999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.752 11:37:51 -- accel/accel.sh@18 -- # out=' 00:10:18.752 SPDK Configuration: 00:10:18.752 Core mask: 0x1 00:10:18.752 00:10:18.752 Accel Perf Configuration: 00:10:18.752 Workload Type: copy_crc32c 00:10:18.752 CRC-32C seed: 0 00:10:18.752 Vector size: 4096 bytes 00:10:18.752 Transfer size: 4096 bytes 00:10:18.752 Vector count 1 00:10:18.752 Module: software 00:10:18.752 Queue depth: 32 00:10:18.752 Allocate depth: 32 00:10:18.752 # threads/core: 1 00:10:18.752 Run time: 1 seconds 00:10:18.752 Verify: Yes 00:10:18.752 00:10:18.752 Running for 1 seconds... 
00:10:18.752 00:10:18.752 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:18.752 ------------------------------------------------------------------------------------ 00:10:18.752 0,0 291936/s 1140 MiB/s 0 0 00:10:18.752 ==================================================================================== 00:10:18.752 Total 291936/s 1140 MiB/s 0 0' 00:10:18.752 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:18.752 11:37:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:18.752 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:18.752 11:37:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.752 11:37:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:18.752 11:37:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.752 11:37:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.752 11:37:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.752 11:37:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.752 11:37:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.752 11:37:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.752 11:37:51 -- accel/accel.sh@42 -- # jq -r . 00:10:18.752 [2024-11-20 11:37:51.579497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.752 [2024-11-20 11:37:51.579594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:10:18.752 [2024-11-20 11:37:51.717183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.012 [2024-11-20 11:37:51.814280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val= 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val= 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=0x1 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val= 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val= 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=0 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 
11:37:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val= 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=software 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@23 -- # accel_module=software 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=32 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=32 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=1 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val=Yes 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val= 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:19.012 11:37:51 -- accel/accel.sh@21 -- # val= 00:10:19.012 11:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # IFS=: 00:10:19.012 11:37:51 -- accel/accel.sh@20 -- # read -r var val 00:10:20.391 11:37:53 -- accel/accel.sh@21 -- # val= 00:10:20.391 11:37:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # IFS=: 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # read -r var val 00:10:20.391 11:37:53 -- accel/accel.sh@21 -- # val= 00:10:20.391 11:37:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # IFS=: 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # read -r var val 00:10:20.391 11:37:53 -- accel/accel.sh@21 -- # val= 00:10:20.391 11:37:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # IFS=: 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # read -r var val 00:10:20.391 11:37:53 -- accel/accel.sh@21 -- # val= 00:10:20.391 11:37:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # IFS=: 
00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # read -r var val 00:10:20.391 11:37:53 -- accel/accel.sh@21 -- # val= 00:10:20.391 11:37:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # IFS=: 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # read -r var val 00:10:20.391 11:37:53 -- accel/accel.sh@21 -- # val= 00:10:20.391 11:37:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # IFS=: 00:10:20.391 11:37:53 -- accel/accel.sh@20 -- # read -r var val 00:10:20.391 11:37:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:20.391 11:37:53 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:20.391 11:37:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:20.391 00:10:20.391 real 0m2.969s 00:10:20.391 user 0m2.569s 00:10:20.391 sys 0m0.202s 00:10:20.391 11:37:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.391 11:37:53 -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 ************************************ 00:10:20.391 END TEST accel_copy_crc32c 00:10:20.391 ************************************ 00:10:20.391 11:37:53 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:20.391 11:37:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:20.391 11:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.391 11:37:53 -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 ************************************ 00:10:20.391 START TEST accel_copy_crc32c_C2 00:10:20.391 ************************************ 00:10:20.391 11:37:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:20.391 11:37:53 -- accel/accel.sh@16 -- # local accel_opc 00:10:20.391 11:37:53 -- accel/accel.sh@17 -- # local accel_module 00:10:20.391 11:37:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:20.391 11:37:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:20.391 11:37:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.391 11:37:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.391 11:37:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.391 11:37:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.391 11:37:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.391 11:37:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.391 11:37:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.391 11:37:53 -- accel/accel.sh@42 -- # jq -r . 00:10:20.391 [2024-11-20 11:37:53.113531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:20.391 [2024-11-20 11:37:53.113712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58954 ] 00:10:20.391 [2024-11-20 11:37:53.254053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.391 [2024-11-20 11:37:53.359049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.768 11:37:54 -- accel/accel.sh@18 -- # out=' 00:10:21.768 SPDK Configuration: 00:10:21.768 Core mask: 0x1 00:10:21.768 00:10:21.768 Accel Perf Configuration: 00:10:21.768 Workload Type: copy_crc32c 00:10:21.768 CRC-32C seed: 0 00:10:21.768 Vector size: 4096 bytes 00:10:21.768 Transfer size: 8192 bytes 00:10:21.768 Vector count 2 00:10:21.768 Module: software 00:10:21.768 Queue depth: 32 00:10:21.768 Allocate depth: 32 00:10:21.768 # threads/core: 1 00:10:21.768 Run time: 1 seconds 00:10:21.768 Verify: Yes 00:10:21.768 00:10:21.768 Running for 1 seconds... 00:10:21.768 00:10:21.768 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.768 ------------------------------------------------------------------------------------ 00:10:21.769 0,0 192416/s 1503 MiB/s 0 0 00:10:21.769 ==================================================================================== 00:10:21.769 Total 192416/s 751 MiB/s 0 0' 00:10:21.769 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:21.769 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:21.769 11:37:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:21.769 11:37:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.769 11:37:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.769 11:37:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.769 11:37:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.769 11:37:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.769 11:37:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.769 11:37:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.769 11:37:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:21.769 11:37:54 -- accel/accel.sh@42 -- # jq -r . 00:10:21.769 [2024-11-20 11:37:54.602120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:21.769 [2024-11-20 11:37:54.602302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58973 ] 00:10:21.769 [2024-11-20 11:37:54.740622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.028 [2024-11-20 11:37:54.845936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.028 11:37:54 -- accel/accel.sh@21 -- # val= 00:10:22.028 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.028 11:37:54 -- accel/accel.sh@21 -- # val= 00:10:22.028 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.028 11:37:54 -- accel/accel.sh@21 -- # val=0x1 00:10:22.028 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.028 11:37:54 -- accel/accel.sh@21 -- # val= 00:10:22.028 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.028 11:37:54 -- accel/accel.sh@21 -- # val= 00:10:22.028 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.028 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val=0 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val= 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val=software 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@23 -- # accel_module=software 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val=32 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val=32 
00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val=1 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val=Yes 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val= 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:22.029 11:37:54 -- accel/accel.sh@21 -- # val= 00:10:22.029 11:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # IFS=: 00:10:22.029 11:37:54 -- accel/accel.sh@20 -- # read -r var val 00:10:23.406 11:37:56 -- accel/accel.sh@21 -- # val= 00:10:23.406 11:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # IFS=: 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # read -r var val 00:10:23.406 11:37:56 -- accel/accel.sh@21 -- # val= 00:10:23.406 11:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # IFS=: 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # read -r var val 00:10:23.406 11:37:56 -- accel/accel.sh@21 -- # val= 00:10:23.406 11:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # IFS=: 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # read -r var val 00:10:23.406 11:37:56 -- accel/accel.sh@21 -- # val= 00:10:23.406 11:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # IFS=: 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # read -r var val 00:10:23.406 11:37:56 -- accel/accel.sh@21 -- # val= 00:10:23.406 11:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # IFS=: 00:10:23.406 11:37:56 -- accel/accel.sh@20 -- # read -r var val 00:10:23.406 11:37:56 -- accel/accel.sh@21 -- # val= 00:10:23.407 11:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.407 11:37:56 -- accel/accel.sh@20 -- # IFS=: 00:10:23.407 11:37:56 -- accel/accel.sh@20 -- # read -r var val 00:10:23.407 11:37:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:23.407 11:37:56 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:23.407 11:37:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.407 00:10:23.407 real 0m2.985s 00:10:23.407 user 0m2.580s 00:10:23.407 sys 0m0.206s 00:10:23.407 11:37:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.407 ************************************ 00:10:23.407 END TEST accel_copy_crc32c_C2 00:10:23.407 ************************************ 00:10:23.407 11:37:56 -- common/autotest_common.sh@10 -- # set +x 00:10:23.407 11:37:56 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:23.407 11:37:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:10:23.407 11:37:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.407 11:37:56 -- common/autotest_common.sh@10 -- # set +x 00:10:23.407 ************************************ 00:10:23.407 START TEST accel_dualcast 00:10:23.407 ************************************ 00:10:23.407 11:37:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:10:23.407 11:37:56 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.407 11:37:56 -- accel/accel.sh@17 -- # local accel_module 00:10:23.407 11:37:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:23.407 11:37:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:23.407 11:37:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.407 11:37:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.407 11:37:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.407 11:37:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.407 11:37:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.407 11:37:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.407 11:37:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.407 11:37:56 -- accel/accel.sh@42 -- # jq -r . 00:10:23.407 [2024-11-20 11:37:56.152881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:23.407 [2024-11-20 11:37:56.153081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:10:23.407 [2024-11-20 11:37:56.291332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.407 [2024-11-20 11:37:56.396970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.783 11:37:57 -- accel/accel.sh@18 -- # out=' 00:10:24.783 SPDK Configuration: 00:10:24.783 Core mask: 0x1 00:10:24.783 00:10:24.783 Accel Perf Configuration: 00:10:24.783 Workload Type: dualcast 00:10:24.783 Transfer size: 4096 bytes 00:10:24.783 Vector count 1 00:10:24.783 Module: software 00:10:24.783 Queue depth: 32 00:10:24.783 Allocate depth: 32 00:10:24.783 # threads/core: 1 00:10:24.783 Run time: 1 seconds 00:10:24.783 Verify: Yes 00:10:24.783 00:10:24.783 Running for 1 seconds... 00:10:24.783 00:10:24.783 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:24.783 ------------------------------------------------------------------------------------ 00:10:24.783 0,0 423872/s 1655 MiB/s 0 0 00:10:24.783 ==================================================================================== 00:10:24.783 Total 423872/s 1655 MiB/s 0 0' 00:10:24.783 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:24.783 11:37:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:24.783 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:24.783 11:37:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.783 11:37:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:24.783 11:37:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.783 11:37:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.783 11:37:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.783 11:37:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.783 11:37:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.783 11:37:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.783 11:37:57 -- accel/accel.sh@42 -- # jq -r . 
00:10:24.783 [2024-11-20 11:37:57.641623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:24.783 [2024-11-20 11:37:57.641749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59027 ] 00:10:24.783 [2024-11-20 11:37:57.780877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.042 [2024-11-20 11:37:57.885781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val= 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val= 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val=0x1 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val= 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val= 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val=dualcast 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val= 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val=software 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@23 -- # accel_module=software 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val=32 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val=32 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val=1 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 
11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val=Yes 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val= 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:25.042 11:37:57 -- accel/accel.sh@21 -- # val= 00:10:25.042 11:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # IFS=: 00:10:25.042 11:37:57 -- accel/accel.sh@20 -- # read -r var val 00:10:26.424 11:37:59 -- accel/accel.sh@21 -- # val= 00:10:26.424 11:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # IFS=: 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # read -r var val 00:10:26.424 11:37:59 -- accel/accel.sh@21 -- # val= 00:10:26.424 11:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # IFS=: 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # read -r var val 00:10:26.424 11:37:59 -- accel/accel.sh@21 -- # val= 00:10:26.424 11:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # IFS=: 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # read -r var val 00:10:26.424 11:37:59 -- accel/accel.sh@21 -- # val= 00:10:26.424 11:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # IFS=: 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # read -r var val 00:10:26.424 11:37:59 -- accel/accel.sh@21 -- # val= 00:10:26.424 11:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # IFS=: 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # read -r var val 00:10:26.424 11:37:59 -- accel/accel.sh@21 -- # val= 00:10:26.424 11:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # IFS=: 00:10:26.424 11:37:59 -- accel/accel.sh@20 -- # read -r var val 00:10:26.424 11:37:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:26.424 11:37:59 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:26.424 11:37:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.424 00:10:26.424 real 0m2.983s 00:10:26.424 user 0m2.588s 00:10:26.424 sys 0m0.196s 00:10:26.424 11:37:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:26.424 11:37:59 -- common/autotest_common.sh@10 -- # set +x 00:10:26.424 ************************************ 00:10:26.424 END TEST accel_dualcast 00:10:26.424 ************************************ 00:10:26.424 11:37:59 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:26.424 11:37:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:26.424 11:37:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.424 11:37:59 -- common/autotest_common.sh@10 -- # set +x 00:10:26.424 ************************************ 00:10:26.424 START TEST accel_compare 00:10:26.424 ************************************ 00:10:26.424 11:37:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:10:26.424 
11:37:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:26.424 11:37:59 -- accel/accel.sh@17 -- # local accel_module 00:10:26.424 11:37:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:26.425 11:37:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:26.425 11:37:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.425 11:37:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.425 11:37:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.425 11:37:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.425 11:37:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.425 11:37:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.425 11:37:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.425 11:37:59 -- accel/accel.sh@42 -- # jq -r . 00:10:26.425 [2024-11-20 11:37:59.176104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:26.425 [2024-11-20 11:37:59.176239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59056 ] 00:10:26.425 [2024-11-20 11:37:59.320834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.425 [2024-11-20 11:37:59.420710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.810 11:38:00 -- accel/accel.sh@18 -- # out=' 00:10:27.810 SPDK Configuration: 00:10:27.810 Core mask: 0x1 00:10:27.810 00:10:27.810 Accel Perf Configuration: 00:10:27.810 Workload Type: compare 00:10:27.810 Transfer size: 4096 bytes 00:10:27.810 Vector count 1 00:10:27.810 Module: software 00:10:27.810 Queue depth: 32 00:10:27.810 Allocate depth: 32 00:10:27.810 # threads/core: 1 00:10:27.810 Run time: 1 seconds 00:10:27.810 Verify: Yes 00:10:27.810 00:10:27.810 Running for 1 seconds... 00:10:27.810 00:10:27.810 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:27.810 ------------------------------------------------------------------------------------ 00:10:27.810 0,0 528416/s 2064 MiB/s 0 0 00:10:27.810 ==================================================================================== 00:10:27.810 Total 528416/s 2064 MiB/s 0 0' 00:10:27.810 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.810 11:38:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:27.810 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.810 11:38:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:27.810 11:38:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.810 11:38:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.810 11:38:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.810 11:38:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.810 11:38:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.810 11:38:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.810 11:38:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.810 11:38:00 -- accel/accel.sh@42 -- # jq -r . 00:10:27.810 [2024-11-20 11:38:00.646424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:27.810 [2024-11-20 11:38:00.646589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:10:27.810 [2024-11-20 11:38:00.782307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.069 [2024-11-20 11:38:00.886274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val= 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val= 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val=0x1 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val= 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val= 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val=compare 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val= 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val=software 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@23 -- # accel_module=software 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val=32 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val=32 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val=1 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val=Yes 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val= 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.069 11:38:00 -- accel/accel.sh@21 -- # val= 00:10:28.069 11:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # IFS=: 00:10:28.069 11:38:00 -- accel/accel.sh@20 -- # read -r var val 00:10:29.448 11:38:02 -- accel/accel.sh@21 -- # val= 00:10:29.448 11:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.448 11:38:02 -- accel/accel.sh@21 -- # val= 00:10:29.448 11:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.448 11:38:02 -- accel/accel.sh@21 -- # val= 00:10:29.448 11:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.448 11:38:02 -- accel/accel.sh@21 -- # val= 00:10:29.448 11:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.448 11:38:02 -- accel/accel.sh@21 -- # val= 00:10:29.448 11:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.448 11:38:02 -- accel/accel.sh@21 -- # val= 00:10:29.448 11:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.448 11:38:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.448 11:38:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:29.448 11:38:02 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:29.448 11:38:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.448 00:10:29.448 real 0m2.938s 00:10:29.448 user 0m1.289s 00:10:29.448 sys 0m0.089s 00:10:29.448 11:38:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:29.448 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:10:29.448 ************************************ 00:10:29.448 END TEST accel_compare 00:10:29.448 ************************************ 00:10:29.448 11:38:02 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:29.448 11:38:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:29.448 11:38:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:29.448 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:10:29.448 ************************************ 00:10:29.448 START TEST accel_xor 00:10:29.448 ************************************ 00:10:29.448 11:38:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:10:29.448 11:38:02 -- accel/accel.sh@16 -- # local accel_opc 00:10:29.448 11:38:02 -- accel/accel.sh@17 -- # local accel_module 00:10:29.448 
11:38:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:29.448 11:38:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:29.448 11:38:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.448 11:38:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.448 11:38:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.448 11:38:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.448 11:38:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.448 11:38:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.448 11:38:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.448 11:38:02 -- accel/accel.sh@42 -- # jq -r . 00:10:29.448 [2024-11-20 11:38:02.176198] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:29.448 [2024-11-20 11:38:02.176333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59110 ] 00:10:29.448 [2024-11-20 11:38:02.313124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.448 [2024-11-20 11:38:02.402977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.846 11:38:03 -- accel/accel.sh@18 -- # out=' 00:10:30.846 SPDK Configuration: 00:10:30.846 Core mask: 0x1 00:10:30.846 00:10:30.846 Accel Perf Configuration: 00:10:30.846 Workload Type: xor 00:10:30.846 Source buffers: 2 00:10:30.846 Transfer size: 4096 bytes 00:10:30.846 Vector count 1 00:10:30.846 Module: software 00:10:30.846 Queue depth: 32 00:10:30.846 Allocate depth: 32 00:10:30.846 # threads/core: 1 00:10:30.846 Run time: 1 seconds 00:10:30.846 Verify: Yes 00:10:30.846 00:10:30.846 Running for 1 seconds... 00:10:30.846 00:10:30.846 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:30.846 ------------------------------------------------------------------------------------ 00:10:30.846 0,0 361760/s 1413 MiB/s 0 0 00:10:30.846 ==================================================================================== 00:10:30.847 Total 361760/s 1413 MiB/s 0 0' 00:10:30.847 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:30.847 11:38:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:30.847 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:30.847 11:38:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:30.847 11:38:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.847 11:38:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.847 11:38:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.847 11:38:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.847 11:38:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.847 11:38:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.847 11:38:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.847 11:38:03 -- accel/accel.sh@42 -- # jq -r . 00:10:30.847 [2024-11-20 11:38:03.646067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:30.847 [2024-11-20 11:38:03.646771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:10:30.847 [2024-11-20 11:38:03.785495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.847 [2024-11-20 11:38:03.879597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val= 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val= 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=0x1 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val= 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val= 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=xor 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=2 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val= 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=software 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@23 -- # accel_module=software 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=32 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=32 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=1 00:10:31.105 11:38:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val=Yes 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val= 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.105 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:31.105 11:38:03 -- accel/accel.sh@21 -- # val= 00:10:31.105 11:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.106 11:38:03 -- accel/accel.sh@20 -- # IFS=: 00:10:31.106 11:38:03 -- accel/accel.sh@20 -- # read -r var val 00:10:32.485 11:38:05 -- accel/accel.sh@21 -- # val= 00:10:32.485 11:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # IFS=: 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # read -r var val 00:10:32.485 11:38:05 -- accel/accel.sh@21 -- # val= 00:10:32.485 11:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # IFS=: 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # read -r var val 00:10:32.485 11:38:05 -- accel/accel.sh@21 -- # val= 00:10:32.485 11:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # IFS=: 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # read -r var val 00:10:32.485 11:38:05 -- accel/accel.sh@21 -- # val= 00:10:32.485 11:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # IFS=: 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # read -r var val 00:10:32.485 11:38:05 -- accel/accel.sh@21 -- # val= 00:10:32.485 11:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # IFS=: 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # read -r var val 00:10:32.485 11:38:05 -- accel/accel.sh@21 -- # val= 00:10:32.485 11:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # IFS=: 00:10:32.485 11:38:05 -- accel/accel.sh@20 -- # read -r var val 00:10:32.485 11:38:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.485 11:38:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:32.485 11:38:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.485 00:10:32.485 real 0m2.944s 00:10:32.485 user 0m1.275s 00:10:32.485 sys 0m0.097s 00:10:32.485 11:38:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.485 11:38:05 -- common/autotest_common.sh@10 -- # set +x 00:10:32.485 ************************************ 00:10:32.485 END TEST accel_xor 00:10:32.485 ************************************ 00:10:32.485 11:38:05 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:32.485 11:38:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:32.485 11:38:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.485 11:38:05 -- common/autotest_common.sh@10 -- # set +x 00:10:32.485 ************************************ 00:10:32.485 START TEST accel_xor 00:10:32.485 ************************************ 00:10:32.485 
11:38:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:10:32.485 11:38:05 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.485 11:38:05 -- accel/accel.sh@17 -- # local accel_module 00:10:32.485 11:38:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:32.485 11:38:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:32.485 11:38:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.485 11:38:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.485 11:38:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.485 11:38:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.485 11:38:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.485 11:38:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.485 11:38:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.485 11:38:05 -- accel/accel.sh@42 -- # jq -r . 00:10:32.485 [2024-11-20 11:38:05.176967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.485 [2024-11-20 11:38:05.177148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59164 ] 00:10:32.485 [2024-11-20 11:38:05.314207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.485 [2024-11-20 11:38:05.409999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.867 11:38:06 -- accel/accel.sh@18 -- # out=' 00:10:33.867 SPDK Configuration: 00:10:33.867 Core mask: 0x1 00:10:33.867 00:10:33.867 Accel Perf Configuration: 00:10:33.867 Workload Type: xor 00:10:33.867 Source buffers: 3 00:10:33.867 Transfer size: 4096 bytes 00:10:33.867 Vector count 1 00:10:33.867 Module: software 00:10:33.867 Queue depth: 32 00:10:33.867 Allocate depth: 32 00:10:33.867 # threads/core: 1 00:10:33.867 Run time: 1 seconds 00:10:33.867 Verify: Yes 00:10:33.867 00:10:33.867 Running for 1 seconds... 00:10:33.867 00:10:33.867 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:33.867 ------------------------------------------------------------------------------------ 00:10:33.867 0,0 392320/s 1532 MiB/s 0 0 00:10:33.867 ==================================================================================== 00:10:33.867 Total 392320/s 1532 MiB/s 0 0' 00:10:33.867 11:38:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.867 11:38:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.867 11:38:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.867 11:38:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.867 11:38:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.867 11:38:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.867 11:38:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.867 11:38:06 -- accel/accel.sh@42 -- # jq -r . 00:10:33.867 [2024-11-20 11:38:06.631607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:33.867 [2024-11-20 11:38:06.631846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59184 ] 00:10:33.867 [2024-11-20 11:38:06.758086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.867 [2024-11-20 11:38:06.851924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val= 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val= 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val=0x1 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val= 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val= 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val=xor 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val=3 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val= 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:33.867 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:33.867 11:38:06 -- accel/accel.sh@21 -- # val=software 00:10:33.867 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.867 11:38:06 -- accel/accel.sh@23 -- # accel_module=software 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:34.127 11:38:06 -- accel/accel.sh@21 -- # val=32 00:10:34.127 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:34.127 11:38:06 -- accel/accel.sh@21 -- # val=32 00:10:34.127 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:34.127 11:38:06 -- accel/accel.sh@21 -- # val=1 00:10:34.127 11:38:06 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:34.127 11:38:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:34.127 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:34.127 11:38:06 -- accel/accel.sh@21 -- # val=Yes 00:10:34.127 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:34.127 11:38:06 -- accel/accel.sh@21 -- # val= 00:10:34.127 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:34.127 11:38:06 -- accel/accel.sh@21 -- # val= 00:10:34.127 11:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # IFS=: 00:10:34.127 11:38:06 -- accel/accel.sh@20 -- # read -r var val 00:10:35.066 11:38:08 -- accel/accel.sh@21 -- # val= 00:10:35.066 11:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # IFS=: 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # read -r var val 00:10:35.066 11:38:08 -- accel/accel.sh@21 -- # val= 00:10:35.066 11:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # IFS=: 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # read -r var val 00:10:35.066 11:38:08 -- accel/accel.sh@21 -- # val= 00:10:35.066 11:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # IFS=: 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # read -r var val 00:10:35.066 11:38:08 -- accel/accel.sh@21 -- # val= 00:10:35.066 11:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # IFS=: 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # read -r var val 00:10:35.066 11:38:08 -- accel/accel.sh@21 -- # val= 00:10:35.066 11:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # IFS=: 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # read -r var val 00:10:35.066 11:38:08 -- accel/accel.sh@21 -- # val= 00:10:35.066 11:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # IFS=: 00:10:35.066 11:38:08 -- accel/accel.sh@20 -- # read -r var val 00:10:35.066 11:38:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:35.066 11:38:08 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:35.066 11:38:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.066 00:10:35.066 real 0m2.924s 00:10:35.066 user 0m2.545s 00:10:35.066 sys 0m0.182s 00:10:35.066 11:38:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:35.066 11:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:35.066 ************************************ 00:10:35.066 END TEST accel_xor 00:10:35.066 ************************************ 00:10:35.326 11:38:08 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:35.326 11:38:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:35.326 11:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:35.326 11:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:35.326 ************************************ 00:10:35.326 START TEST accel_dif_verify 00:10:35.326 ************************************ 
00:10:35.326 11:38:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:10:35.326 11:38:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.326 11:38:08 -- accel/accel.sh@17 -- # local accel_module 00:10:35.326 11:38:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:35.326 11:38:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:35.326 11:38:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.326 11:38:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.326 11:38:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.326 11:38:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.326 11:38:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.326 11:38:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.326 11:38:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.326 11:38:08 -- accel/accel.sh@42 -- # jq -r . 00:10:35.326 [2024-11-20 11:38:08.163326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:35.326 [2024-11-20 11:38:08.163500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59218 ] 00:10:35.326 [2024-11-20 11:38:08.301644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.585 [2024-11-20 11:38:08.385084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.991 11:38:09 -- accel/accel.sh@18 -- # out=' 00:10:36.991 SPDK Configuration: 00:10:36.991 Core mask: 0x1 00:10:36.991 00:10:36.991 Accel Perf Configuration: 00:10:36.991 Workload Type: dif_verify 00:10:36.991 Vector size: 4096 bytes 00:10:36.991 Transfer size: 4096 bytes 00:10:36.991 Block size: 512 bytes 00:10:36.991 Metadata size: 8 bytes 00:10:36.991 Vector count 1 00:10:36.991 Module: software 00:10:36.991 Queue depth: 32 00:10:36.991 Allocate depth: 32 00:10:36.991 # threads/core: 1 00:10:36.991 Run time: 1 seconds 00:10:36.991 Verify: No 00:10:36.991 00:10:36.991 Running for 1 seconds... 00:10:36.991 00:10:36.991 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:36.991 ------------------------------------------------------------------------------------ 00:10:36.991 0,0 119200/s 472 MiB/s 0 0 00:10:36.991 ==================================================================================== 00:10:36.991 Total 119200/s 465 MiB/s 0 0' 00:10:36.991 11:38:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.991 11:38:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.991 11:38:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.991 11:38:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.991 11:38:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.991 11:38:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.991 11:38:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.991 11:38:09 -- accel/accel.sh@42 -- # jq -r . 00:10:36.991 [2024-11-20 11:38:09.608601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:36.991 [2024-11-20 11:38:09.608724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59240 ] 00:10:36.991 [2024-11-20 11:38:09.747709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.991 [2024-11-20 11:38:09.840169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val= 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val= 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val=0x1 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val= 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val= 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val=dif_verify 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val= 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.991 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.991 11:38:09 -- accel/accel.sh@21 -- # val=software 00:10:36.991 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.991 11:38:09 -- accel/accel.sh@23 -- # accel_module=software 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.992 11:38:09 -- accel/accel.sh@21 
-- # val=32 00:10:36.992 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.992 11:38:09 -- accel/accel.sh@21 -- # val=32 00:10:36.992 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.992 11:38:09 -- accel/accel.sh@21 -- # val=1 00:10:36.992 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.992 11:38:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:36.992 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.992 11:38:09 -- accel/accel.sh@21 -- # val=No 00:10:36.992 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.992 11:38:09 -- accel/accel.sh@21 -- # val= 00:10:36.992 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:36.992 11:38:09 -- accel/accel.sh@21 -- # val= 00:10:36.992 11:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # IFS=: 00:10:36.992 11:38:09 -- accel/accel.sh@20 -- # read -r var val 00:10:38.370 11:38:11 -- accel/accel.sh@21 -- # val= 00:10:38.370 11:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.370 11:38:11 -- accel/accel.sh@21 -- # val= 00:10:38.370 11:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.370 11:38:11 -- accel/accel.sh@21 -- # val= 00:10:38.370 11:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.370 11:38:11 -- accel/accel.sh@21 -- # val= 00:10:38.370 11:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.370 11:38:11 -- accel/accel.sh@21 -- # val= 00:10:38.370 11:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.370 11:38:11 -- accel/accel.sh@21 -- # val= 00:10:38.370 11:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.370 11:38:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.370 11:38:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:38.370 11:38:11 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:38.370 11:38:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.370 00:10:38.370 real 0m2.925s 00:10:38.370 user 0m2.547s 00:10:38.370 sys 0m0.182s 00:10:38.370 11:38:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:38.370 11:38:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.370 ************************************ 00:10:38.370 END TEST 
accel_dif_verify 00:10:38.370 ************************************ 00:10:38.370 11:38:11 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:38.370 11:38:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:38.370 11:38:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:38.370 11:38:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.370 ************************************ 00:10:38.370 START TEST accel_dif_generate 00:10:38.370 ************************************ 00:10:38.370 11:38:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:10:38.370 11:38:11 -- accel/accel.sh@16 -- # local accel_opc 00:10:38.370 11:38:11 -- accel/accel.sh@17 -- # local accel_module 00:10:38.370 11:38:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:38.370 11:38:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:38.370 11:38:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.370 11:38:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.370 11:38:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.370 11:38:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.370 11:38:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.370 11:38:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.370 11:38:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.370 11:38:11 -- accel/accel.sh@42 -- # jq -r . 00:10:38.370 [2024-11-20 11:38:11.143593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:38.370 [2024-11-20 11:38:11.143774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:10:38.370 [2024-11-20 11:38:11.281749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.370 [2024-11-20 11:38:11.387175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.749 11:38:12 -- accel/accel.sh@18 -- # out=' 00:10:39.749 SPDK Configuration: 00:10:39.749 Core mask: 0x1 00:10:39.749 00:10:39.749 Accel Perf Configuration: 00:10:39.749 Workload Type: dif_generate 00:10:39.749 Vector size: 4096 bytes 00:10:39.749 Transfer size: 4096 bytes 00:10:39.749 Block size: 512 bytes 00:10:39.749 Metadata size: 8 bytes 00:10:39.749 Vector count 1 00:10:39.749 Module: software 00:10:39.749 Queue depth: 32 00:10:39.749 Allocate depth: 32 00:10:39.749 # threads/core: 1 00:10:39.749 Run time: 1 seconds 00:10:39.749 Verify: No 00:10:39.749 00:10:39.749 Running for 1 seconds... 
00:10:39.749 00:10:39.749 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:39.749 ------------------------------------------------------------------------------------ 00:10:39.749 0,0 134496/s 533 MiB/s 0 0 00:10:39.749 ==================================================================================== 00:10:39.749 Total 134496/s 525 MiB/s 0 0' 00:10:39.749 11:38:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:39.749 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:39.749 11:38:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:39.749 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:39.749 11:38:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.749 11:38:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.749 11:38:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.749 11:38:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.749 11:38:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.749 11:38:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.749 11:38:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.749 11:38:12 -- accel/accel.sh@42 -- # jq -r . 00:10:39.749 [2024-11-20 11:38:12.609452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:39.749 [2024-11-20 11:38:12.609513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ] 00:10:39.749 [2024-11-20 11:38:12.748202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.009 [2024-11-20 11:38:12.849230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val= 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val= 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val=0x1 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val= 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val= 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val=dif_generate 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 
00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val= 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val=software 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val=32 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val=32 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val=1 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.009 11:38:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:40.009 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.009 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.010 11:38:12 -- accel/accel.sh@21 -- # val=No 00:10:40.010 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.010 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.010 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.010 11:38:12 -- accel/accel.sh@21 -- # val= 00:10:40.010 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.010 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.010 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:40.010 11:38:12 -- accel/accel.sh@21 -- # val= 00:10:40.010 11:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.010 11:38:12 -- accel/accel.sh@20 -- # IFS=: 00:10:40.010 11:38:12 -- accel/accel.sh@20 -- # read -r var val 00:10:41.392 11:38:14 -- accel/accel.sh@21 -- # val= 00:10:41.392 11:38:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # IFS=: 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # read -r var val 00:10:41.392 11:38:14 -- accel/accel.sh@21 -- # val= 00:10:41.392 11:38:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # IFS=: 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # read -r var val 00:10:41.392 11:38:14 -- accel/accel.sh@21 -- # val= 00:10:41.392 11:38:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.392 11:38:14 -- 
accel/accel.sh@20 -- # IFS=: 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # read -r var val 00:10:41.392 11:38:14 -- accel/accel.sh@21 -- # val= 00:10:41.392 11:38:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # IFS=: 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # read -r var val 00:10:41.392 11:38:14 -- accel/accel.sh@21 -- # val= 00:10:41.392 11:38:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # IFS=: 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # read -r var val 00:10:41.392 11:38:14 -- accel/accel.sh@21 -- # val= 00:10:41.392 11:38:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # IFS=: 00:10:41.392 11:38:14 -- accel/accel.sh@20 -- # read -r var val 00:10:41.392 11:38:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:41.392 11:38:14 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:41.392 11:38:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.392 00:10:41.392 real 0m2.950s 00:10:41.392 user 0m2.562s 00:10:41.392 sys 0m0.191s 00:10:41.392 11:38:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:41.392 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:10:41.392 ************************************ 00:10:41.392 END TEST accel_dif_generate 00:10:41.392 ************************************ 00:10:41.392 11:38:14 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:41.392 11:38:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:41.392 11:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.392 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:10:41.392 ************************************ 00:10:41.392 START TEST accel_dif_generate_copy 00:10:41.392 ************************************ 00:10:41.392 11:38:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:10:41.392 11:38:14 -- accel/accel.sh@16 -- # local accel_opc 00:10:41.392 11:38:14 -- accel/accel.sh@17 -- # local accel_module 00:10:41.392 11:38:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:41.392 11:38:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:41.392 11:38:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.392 11:38:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:41.392 11:38:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.392 11:38:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.392 11:38:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:41.392 11:38:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:41.392 11:38:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:41.392 11:38:14 -- accel/accel.sh@42 -- # jq -r . 00:10:41.393 [2024-11-20 11:38:14.151567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:41.393 [2024-11-20 11:38:14.151810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:10:41.393 [2024-11-20 11:38:14.290429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.393 [2024-11-20 11:38:14.390141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.800 11:38:15 -- accel/accel.sh@18 -- # out=' 00:10:42.800 SPDK Configuration: 00:10:42.800 Core mask: 0x1 00:10:42.800 00:10:42.800 Accel Perf Configuration: 00:10:42.800 Workload Type: dif_generate_copy 00:10:42.800 Vector size: 4096 bytes 00:10:42.800 Transfer size: 4096 bytes 00:10:42.800 Vector count 1 00:10:42.800 Module: software 00:10:42.800 Queue depth: 32 00:10:42.800 Allocate depth: 32 00:10:42.800 # threads/core: 1 00:10:42.800 Run time: 1 seconds 00:10:42.800 Verify: No 00:10:42.800 00:10:42.800 Running for 1 seconds... 00:10:42.800 00:10:42.800 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:42.800 ------------------------------------------------------------------------------------ 00:10:42.800 0,0 116192/s 460 MiB/s 0 0 00:10:42.800 ==================================================================================== 00:10:42.800 Total 116192/s 453 MiB/s 0 0' 00:10:42.800 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:42.800 11:38:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:42.800 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:42.800 11:38:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.800 11:38:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.800 11:38:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:42.800 11:38:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.800 11:38:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.800 11:38:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.800 11:38:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.800 11:38:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.800 11:38:15 -- accel/accel.sh@42 -- # jq -r . 00:10:42.800 [2024-11-20 11:38:15.628883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:42.800 [2024-11-20 11:38:15.629440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59349 ] 00:10:42.800 [2024-11-20 11:38:15.768099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.059 [2024-11-20 11:38:15.869545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val= 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val= 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val=0x1 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val= 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val= 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val= 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val=software 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val=32 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val=32 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 
-- # val=1 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val=No 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val= 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:43.059 11:38:15 -- accel/accel.sh@21 -- # val= 00:10:43.059 11:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # IFS=: 00:10:43.059 11:38:15 -- accel/accel.sh@20 -- # read -r var val 00:10:44.440 11:38:17 -- accel/accel.sh@21 -- # val= 00:10:44.440 11:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # IFS=: 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # read -r var val 00:10:44.440 11:38:17 -- accel/accel.sh@21 -- # val= 00:10:44.440 11:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # IFS=: 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # read -r var val 00:10:44.440 11:38:17 -- accel/accel.sh@21 -- # val= 00:10:44.440 11:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # IFS=: 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # read -r var val 00:10:44.440 11:38:17 -- accel/accel.sh@21 -- # val= 00:10:44.440 11:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # IFS=: 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # read -r var val 00:10:44.440 11:38:17 -- accel/accel.sh@21 -- # val= 00:10:44.440 11:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # IFS=: 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # read -r var val 00:10:44.440 ************************************ 00:10:44.440 END TEST accel_dif_generate_copy 00:10:44.440 ************************************ 00:10:44.440 11:38:17 -- accel/accel.sh@21 -- # val= 00:10:44.440 11:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # IFS=: 00:10:44.440 11:38:17 -- accel/accel.sh@20 -- # read -r var val 00:10:44.440 11:38:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:44.440 11:38:17 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:44.440 11:38:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.440 00:10:44.440 real 0m2.958s 00:10:44.440 user 0m1.302s 00:10:44.440 sys 0m0.090s 00:10:44.440 11:38:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.440 11:38:17 -- common/autotest_common.sh@10 -- # set +x 00:10:44.440 11:38:17 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:44.440 11:38:17 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:44.440 11:38:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:44.440 11:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.440 11:38:17 -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.440 ************************************ 00:10:44.440 START TEST accel_comp 00:10:44.440 ************************************ 00:10:44.440 11:38:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:44.441 11:38:17 -- accel/accel.sh@16 -- # local accel_opc 00:10:44.441 11:38:17 -- accel/accel.sh@17 -- # local accel_module 00:10:44.441 11:38:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:44.441 11:38:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:44.441 11:38:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.441 11:38:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.441 11:38:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.441 11:38:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.441 11:38:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.441 11:38:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.441 11:38:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.441 11:38:17 -- accel/accel.sh@42 -- # jq -r . 00:10:44.441 [2024-11-20 11:38:17.157126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:44.441 [2024-11-20 11:38:17.157748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59378 ] 00:10:44.441 [2024-11-20 11:38:17.289906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.441 [2024-11-20 11:38:17.383634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.819 11:38:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:45.819 00:10:45.819 SPDK Configuration: 00:10:45.819 Core mask: 0x1 00:10:45.819 00:10:45.819 Accel Perf Configuration: 00:10:45.819 Workload Type: compress 00:10:45.819 Transfer size: 4096 bytes 00:10:45.819 Vector count 1 00:10:45.819 Module: software 00:10:45.819 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:45.819 Queue depth: 32 00:10:45.819 Allocate depth: 32 00:10:45.819 # threads/core: 1 00:10:45.819 Run time: 1 seconds 00:10:45.819 Verify: No 00:10:45.819 00:10:45.819 Running for 1 seconds... 
00:10:45.819 00:10:45.819 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:45.819 ------------------------------------------------------------------------------------ 00:10:45.819 0,0 47456/s 197 MiB/s 0 0 00:10:45.819 ==================================================================================== 00:10:45.819 Total 47456/s 185 MiB/s 0 0' 00:10:45.819 11:38:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:45.819 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:45.819 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:45.819 11:38:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:45.819 11:38:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.819 11:38:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.819 11:38:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.819 11:38:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.819 11:38:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.819 11:38:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.819 11:38:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.819 11:38:18 -- accel/accel.sh@42 -- # jq -r . 00:10:45.819 [2024-11-20 11:38:18.612121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:45.819 [2024-11-20 11:38:18.612236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:10:45.820 [2024-11-20 11:38:18.750817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.820 [2024-11-20 11:38:18.852326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.078 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.078 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=0x1 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=compress 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 
00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=software 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@23 -- # accel_module=software 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=32 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=32 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=1 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val=No 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:46.079 11:38:18 -- accel/accel.sh@21 -- # val= 00:10:46.079 11:38:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # IFS=: 00:10:46.079 11:38:18 -- accel/accel.sh@20 -- # read -r var val 00:10:47.459 11:38:20 -- accel/accel.sh@21 -- # val= 00:10:47.459 11:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # IFS=: 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # read -r var val 00:10:47.459 11:38:20 -- accel/accel.sh@21 -- # val= 00:10:47.459 11:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # IFS=: 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # read -r var val 00:10:47.459 11:38:20 -- accel/accel.sh@21 -- # val= 00:10:47.459 11:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # IFS=: 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # read -r var val 00:10:47.459 11:38:20 -- accel/accel.sh@21 -- # val= 
00:10:47.459 11:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # IFS=: 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # read -r var val 00:10:47.459 11:38:20 -- accel/accel.sh@21 -- # val= 00:10:47.459 11:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # IFS=: 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # read -r var val 00:10:47.459 11:38:20 -- accel/accel.sh@21 -- # val= 00:10:47.459 11:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # IFS=: 00:10:47.459 11:38:20 -- accel/accel.sh@20 -- # read -r var val 00:10:47.459 11:38:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:47.459 11:38:20 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:47.459 11:38:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.459 00:10:47.459 real 0m2.951s 00:10:47.459 user 0m2.567s 00:10:47.459 sys 0m0.183s 00:10:47.459 11:38:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.459 ************************************ 00:10:47.459 END TEST accel_comp 00:10:47.459 ************************************ 00:10:47.459 11:38:20 -- common/autotest_common.sh@10 -- # set +x 00:10:47.459 11:38:20 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.459 11:38:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:47.459 11:38:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.459 11:38:20 -- common/autotest_common.sh@10 -- # set +x 00:10:47.459 ************************************ 00:10:47.459 START TEST accel_decomp 00:10:47.459 ************************************ 00:10:47.459 11:38:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.459 11:38:20 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.459 11:38:20 -- accel/accel.sh@17 -- # local accel_module 00:10:47.459 11:38:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.459 11:38:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.459 11:38:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.459 11:38:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.459 11:38:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.459 11:38:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.460 11:38:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.460 11:38:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.460 11:38:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.460 11:38:20 -- accel/accel.sh@42 -- # jq -r . 00:10:47.460 [2024-11-20 11:38:20.161562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:47.460 [2024-11-20 11:38:20.161719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:10:47.460 [2024-11-20 11:38:20.293231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.460 [2024-11-20 11:38:20.392601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.840 11:38:21 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:48.840 00:10:48.840 SPDK Configuration: 00:10:48.840 Core mask: 0x1 00:10:48.840 00:10:48.840 Accel Perf Configuration: 00:10:48.840 Workload Type: decompress 00:10:48.840 Transfer size: 4096 bytes 00:10:48.840 Vector count 1 00:10:48.840 Module: software 00:10:48.840 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:48.840 Queue depth: 32 00:10:48.840 Allocate depth: 32 00:10:48.840 # threads/core: 1 00:10:48.840 Run time: 1 seconds 00:10:48.840 Verify: Yes 00:10:48.840 00:10:48.840 Running for 1 seconds... 00:10:48.840 00:10:48.840 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:48.840 ------------------------------------------------------------------------------------ 00:10:48.840 0,0 56256/s 103 MiB/s 0 0 00:10:48.840 ==================================================================================== 00:10:48.840 Total 56256/s 219 MiB/s 0 0' 00:10:48.840 11:38:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:48.840 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.840 11:38:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:48.840 11:38:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.840 11:38:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.840 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.840 11:38:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.840 11:38:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.840 11:38:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.840 11:38:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.840 11:38:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.840 11:38:21 -- accel/accel.sh@42 -- # jq -r . 00:10:48.840 [2024-11-20 11:38:21.622570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:48.840 [2024-11-20 11:38:21.622722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59448 ] 00:10:48.840 [2024-11-20 11:38:21.758953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.840 [2024-11-20 11:38:21.864298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val=0x1 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val=decompress 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.100 11:38:21 -- accel/accel.sh@21 -- # val=software 00:10:49.100 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.100 11:38:21 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.100 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- accel/accel.sh@21 -- # val=32 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- 
accel/accel.sh@21 -- # val=32 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- accel/accel.sh@21 -- # val=1 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- accel/accel.sh@21 -- # val=Yes 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:49.101 11:38:21 -- accel/accel.sh@21 -- # val= 00:10:49.101 11:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # IFS=: 00:10:49.101 11:38:21 -- accel/accel.sh@20 -- # read -r var val 00:10:50.040 11:38:23 -- accel/accel.sh@21 -- # val= 00:10:50.040 11:38:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.040 11:38:23 -- accel/accel.sh@20 -- # IFS=: 00:10:50.040 11:38:23 -- accel/accel.sh@20 -- # read -r var val 00:10:50.040 11:38:23 -- accel/accel.sh@21 -- # val= 00:10:50.040 11:38:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.040 11:38:23 -- accel/accel.sh@20 -- # IFS=: 00:10:50.040 11:38:23 -- accel/accel.sh@20 -- # read -r var val 00:10:50.040 11:38:23 -- accel/accel.sh@21 -- # val= 00:10:50.040 11:38:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.040 11:38:23 -- accel/accel.sh@20 -- # IFS=: 00:10:50.040 11:38:23 -- accel/accel.sh@20 -- # read -r var val 00:10:50.299 11:38:23 -- accel/accel.sh@21 -- # val= 00:10:50.299 11:38:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.299 11:38:23 -- accel/accel.sh@20 -- # IFS=: 00:10:50.299 11:38:23 -- accel/accel.sh@20 -- # read -r var val 00:10:50.299 11:38:23 -- accel/accel.sh@21 -- # val= 00:10:50.299 11:38:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.299 11:38:23 -- accel/accel.sh@20 -- # IFS=: 00:10:50.299 11:38:23 -- accel/accel.sh@20 -- # read -r var val 00:10:50.299 11:38:23 -- accel/accel.sh@21 -- # val= 00:10:50.299 11:38:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.299 11:38:23 -- accel/accel.sh@20 -- # IFS=: 00:10:50.299 11:38:23 -- accel/accel.sh@20 -- # read -r var val 00:10:50.299 11:38:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:50.299 11:38:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:50.299 11:38:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.299 00:10:50.299 real 0m2.958s 00:10:50.299 user 0m2.577s 00:10:50.299 sys 0m0.182s 00:10:50.299 11:38:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.299 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:10:50.299 ************************************ 00:10:50.299 END TEST accel_decomp 00:10:50.299 ************************************ 00:10:50.299 11:38:23 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:10:50.299 11:38:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:50.299 11:38:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.299 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:10:50.299 ************************************ 00:10:50.299 START TEST accel_decmop_full 00:10:50.299 ************************************ 00:10:50.299 11:38:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:50.299 11:38:23 -- accel/accel.sh@16 -- # local accel_opc 00:10:50.299 11:38:23 -- accel/accel.sh@17 -- # local accel_module 00:10:50.299 11:38:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:50.299 11:38:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:50.299 11:38:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.299 11:38:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.299 11:38:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.299 11:38:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.299 11:38:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.299 11:38:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.299 11:38:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.299 11:38:23 -- accel/accel.sh@42 -- # jq -r . 00:10:50.299 [2024-11-20 11:38:23.169031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:50.299 [2024-11-20 11:38:23.169203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59488 ] 00:10:50.299 [2024-11-20 11:38:23.307747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.559 [2024-11-20 11:38:23.412856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.942 11:38:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:51.942 00:10:51.942 SPDK Configuration: 00:10:51.942 Core mask: 0x1 00:10:51.942 00:10:51.942 Accel Perf Configuration: 00:10:51.942 Workload Type: decompress 00:10:51.942 Transfer size: 111250 bytes 00:10:51.942 Vector count 1 00:10:51.942 Module: software 00:10:51.942 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:51.942 Queue depth: 32 00:10:51.942 Allocate depth: 32 00:10:51.942 # threads/core: 1 00:10:51.942 Run time: 1 seconds 00:10:51.942 Verify: Yes 00:10:51.942 00:10:51.942 Running for 1 seconds... 
00:10:51.942 00:10:51.942 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:51.942 ------------------------------------------------------------------------------------ 00:10:51.942 0,0 3488/s 144 MiB/s 0 0 00:10:51.942 ==================================================================================== 00:10:51.942 Total 3488/s 370 MiB/s 0 0' 00:10:51.942 11:38:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:51.942 11:38:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.942 11:38:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.942 11:38:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.942 11:38:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.942 11:38:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.942 11:38:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.942 11:38:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.942 11:38:24 -- accel/accel.sh@42 -- # jq -r . 00:10:51.942 [2024-11-20 11:38:24.656980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:51.942 [2024-11-20 11:38:24.657099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 00:10:51.942 [2024-11-20 11:38:24.796835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.942 [2024-11-20 11:38:24.900253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val=0x1 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val=decompress 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:51.942 11:38:24 -- accel/accel.sh@20 
-- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.942 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.942 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.942 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val=software 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@23 -- # accel_module=software 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val=32 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val=32 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val=1 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val=Yes 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.943 11:38:24 -- accel/accel.sh@21 -- # val= 00:10:51.943 11:38:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.943 11:38:24 -- accel/accel.sh@20 -- # read -r var val 00:10:53.321 11:38:26 -- accel/accel.sh@21 -- # val= 00:10:53.321 11:38:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.321 11:38:26 -- accel/accel.sh@21 -- # val= 00:10:53.321 11:38:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.321 11:38:26 -- accel/accel.sh@21 -- # val= 00:10:53.321 11:38:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.321 11:38:26 -- accel/accel.sh@21 -- # 
val= 00:10:53.321 11:38:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.321 11:38:26 -- accel/accel.sh@21 -- # val= 00:10:53.321 11:38:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.321 11:38:26 -- accel/accel.sh@21 -- # val= 00:10:53.321 11:38:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.321 11:38:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.321 ************************************ 00:10:53.321 END TEST accel_decmop_full 00:10:53.321 ************************************ 00:10:53.321 11:38:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:53.321 11:38:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:53.321 11:38:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:53.321 00:10:53.321 real 0m2.995s 00:10:53.321 user 0m2.605s 00:10:53.321 sys 0m0.187s 00:10:53.321 11:38:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.321 11:38:26 -- common/autotest_common.sh@10 -- # set +x 00:10:53.321 11:38:26 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.321 11:38:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:53.321 11:38:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.321 11:38:26 -- common/autotest_common.sh@10 -- # set +x 00:10:53.321 ************************************ 00:10:53.321 START TEST accel_decomp_mcore 00:10:53.321 ************************************ 00:10:53.321 11:38:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.321 11:38:26 -- accel/accel.sh@16 -- # local accel_opc 00:10:53.321 11:38:26 -- accel/accel.sh@17 -- # local accel_module 00:10:53.321 11:38:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.321 11:38:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.321 11:38:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.321 11:38:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.321 11:38:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.321 11:38:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.321 11:38:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.321 11:38:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.321 11:38:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.321 11:38:26 -- accel/accel.sh@42 -- # jq -r . 00:10:53.321 [2024-11-20 11:38:26.220386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:53.321 [2024-11-20 11:38:26.220545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59542 ] 00:10:53.321 [2024-11-20 11:38:26.356713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.581 [2024-11-20 11:38:26.453068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.581 [2024-11-20 11:38:26.453287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.581 [2024-11-20 11:38:26.453257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.581 [2024-11-20 11:38:26.453290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.959 11:38:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:54.959 00:10:54.959 SPDK Configuration: 00:10:54.959 Core mask: 0xf 00:10:54.959 00:10:54.959 Accel Perf Configuration: 00:10:54.959 Workload Type: decompress 00:10:54.959 Transfer size: 4096 bytes 00:10:54.959 Vector count 1 00:10:54.959 Module: software 00:10:54.959 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.959 Queue depth: 32 00:10:54.959 Allocate depth: 32 00:10:54.959 # threads/core: 1 00:10:54.959 Run time: 1 seconds 00:10:54.959 Verify: Yes 00:10:54.959 00:10:54.959 Running for 1 seconds... 00:10:54.959 00:10:54.959 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:54.959 ------------------------------------------------------------------------------------ 00:10:54.959 0,0 48288/s 89 MiB/s 0 0 00:10:54.959 3,0 52768/s 97 MiB/s 0 0 00:10:54.959 2,0 56576/s 104 MiB/s 0 0 00:10:54.959 1,0 56608/s 104 MiB/s 0 0 00:10:54.959 ==================================================================================== 00:10:54.959 Total 214240/s 836 MiB/s 0 0' 00:10:54.959 11:38:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:54.959 11:38:27 -- accel/accel.sh@20 -- # IFS=: 00:10:54.959 11:38:27 -- accel/accel.sh@20 -- # read -r var val 00:10:54.959 11:38:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:54.959 11:38:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.959 11:38:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.959 11:38:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.959 11:38:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.959 11:38:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.959 11:38:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.959 11:38:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.959 11:38:27 -- accel/accel.sh@42 -- # jq -r . 00:10:54.959 [2024-11-20 11:38:27.718322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:54.959 [2024-11-20 11:38:27.718459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59559 ] 00:10:54.959 [2024-11-20 11:38:27.850142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.959 [2024-11-20 11:38:27.954202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.959 [2024-11-20 11:38:27.954566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.959 [2024-11-20 11:38:27.954398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.959 [2024-11-20 11:38:27.954571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=0xf 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=decompress 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=software 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 
00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=32 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=32 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=1 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val=Yes 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:55.217 11:38:28 -- accel/accel.sh@21 -- # val= 00:10:55.217 11:38:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # IFS=: 00:10:55.217 11:38:28 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- 
accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@21 -- # val= 00:10:56.154 11:38:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.154 11:38:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.154 11:38:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:56.154 11:38:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:56.154 11:38:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.154 00:10:56.154 real 0m2.993s 00:10:56.154 user 0m9.313s 00:10:56.154 sys 0m0.209s 00:10:56.154 ************************************ 00:10:56.154 END TEST accel_decomp_mcore 00:10:56.154 ************************************ 00:10:56.154 11:38:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:56.154 11:38:29 -- common/autotest_common.sh@10 -- # set +x 00:10:56.415 11:38:29 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:56.415 11:38:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:56.415 11:38:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.415 11:38:29 -- common/autotest_common.sh@10 -- # set +x 00:10:56.415 ************************************ 00:10:56.415 START TEST accel_decomp_full_mcore 00:10:56.415 ************************************ 00:10:56.415 11:38:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:56.415 11:38:29 -- accel/accel.sh@16 -- # local accel_opc 00:10:56.415 11:38:29 -- accel/accel.sh@17 -- # local accel_module 00:10:56.415 11:38:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:56.415 11:38:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.415 11:38:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:56.415 11:38:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.415 11:38:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.415 11:38:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.415 11:38:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.415 11:38:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.415 11:38:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.415 11:38:29 -- accel/accel.sh@42 -- # jq -r . 00:10:56.415 [2024-11-20 11:38:29.268326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:56.415 [2024-11-20 11:38:29.268414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:10:56.415 [2024-11-20 11:38:29.407255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.675 [2024-11-20 11:38:29.506783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.675 [2024-11-20 11:38:29.507080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.675 [2024-11-20 11:38:29.506976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.675 [2024-11-20 11:38:29.507070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.058 11:38:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:58.058 00:10:58.058 SPDK Configuration: 00:10:58.058 Core mask: 0xf 00:10:58.058 00:10:58.058 Accel Perf Configuration: 00:10:58.058 Workload Type: decompress 00:10:58.058 Transfer size: 111250 bytes 00:10:58.058 Vector count 1 00:10:58.058 Module: software 00:10:58.058 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:58.058 Queue depth: 32 00:10:58.058 Allocate depth: 32 00:10:58.058 # threads/core: 1 00:10:58.058 Run time: 1 seconds 00:10:58.058 Verify: Yes 00:10:58.058 00:10:58.058 Running for 1 seconds... 00:10:58.058 00:10:58.058 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.058 ------------------------------------------------------------------------------------ 00:10:58.058 0,0 3392/s 140 MiB/s 0 0 00:10:58.058 3,0 3968/s 163 MiB/s 0 0 00:10:58.058 2,0 3872/s 159 MiB/s 0 0 00:10:58.058 1,0 3936/s 162 MiB/s 0 0 00:10:58.058 ==================================================================================== 00:10:58.058 Total 15168/s 1609 MiB/s 0 0' 00:10:58.058 11:38:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:58.058 11:38:30 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:58.058 11:38:30 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.058 11:38:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.058 11:38:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.058 11:38:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.058 11:38:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.058 11:38:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.058 11:38:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.058 11:38:30 -- accel/accel.sh@42 -- # jq -r . 00:10:58.058 [2024-11-20 11:38:30.766018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:58.058 [2024-11-20 11:38:30.766725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59619 ] 00:10:58.058 [2024-11-20 11:38:30.898179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.058 [2024-11-20 11:38:31.025859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.058 [2024-11-20 11:38:31.026081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.058 [2024-11-20 11:38:31.026036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.058 [2024-11-20 11:38:31.026142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=0xf 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=decompress 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=software 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@23 -- # accel_module=software 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 
00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=32 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=32 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=1 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val=Yes 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:58.058 11:38:31 -- accel/accel.sh@21 -- # val= 00:10:58.058 11:38:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # IFS=: 00:10:58.058 11:38:31 -- accel/accel.sh@20 -- # read -r var val 00:10:59.437 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- 
accel/accel.sh@20 -- # read -r var val 00:10:59.438 11:38:32 -- accel/accel.sh@21 -- # val= 00:10:59.438 11:38:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # IFS=: 00:10:59.438 11:38:32 -- accel/accel.sh@20 -- # read -r var val 00:10:59.438 ************************************ 00:10:59.438 END TEST accel_decomp_full_mcore 00:10:59.438 ************************************ 00:10:59.438 11:38:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:59.438 11:38:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:59.438 11:38:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:59.438 00:10:59.438 real 0m3.026s 00:10:59.438 user 0m9.335s 00:10:59.438 sys 0m0.227s 00:10:59.438 11:38:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:59.438 11:38:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.438 11:38:32 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:59.438 11:38:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:59.438 11:38:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.438 11:38:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.438 ************************************ 00:10:59.438 START TEST accel_decomp_mthread 00:10:59.438 ************************************ 00:10:59.438 11:38:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:59.438 11:38:32 -- accel/accel.sh@16 -- # local accel_opc 00:10:59.438 11:38:32 -- accel/accel.sh@17 -- # local accel_module 00:10:59.438 11:38:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:59.438 11:38:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:59.438 11:38:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.438 11:38:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.438 11:38:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.438 11:38:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.438 11:38:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.438 11:38:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.438 11:38:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.438 11:38:32 -- accel/accel.sh@42 -- # jq -r . 00:10:59.438 [2024-11-20 11:38:32.344339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:59.438 [2024-11-20 11:38:32.344501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59657 ] 00:10:59.714 [2024-11-20 11:38:32.483408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.714 [2024-11-20 11:38:32.580075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.093 11:38:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:01.093 00:11:01.093 SPDK Configuration: 00:11:01.093 Core mask: 0x1 00:11:01.093 00:11:01.093 Accel Perf Configuration: 00:11:01.093 Workload Type: decompress 00:11:01.093 Transfer size: 4096 bytes 00:11:01.093 Vector count 1 00:11:01.093 Module: software 00:11:01.093 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:01.093 Queue depth: 32 00:11:01.093 Allocate depth: 32 00:11:01.093 # threads/core: 2 00:11:01.093 Run time: 1 seconds 00:11:01.093 Verify: Yes 00:11:01.093 00:11:01.093 Running for 1 seconds... 00:11:01.093 00:11:01.093 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:01.093 ------------------------------------------------------------------------------------ 00:11:01.093 0,1 25920/s 47 MiB/s 0 0 00:11:01.093 0,0 25792/s 47 MiB/s 0 0 00:11:01.093 ==================================================================================== 00:11:01.093 Total 51712/s 202 MiB/s 0 0' 00:11:01.093 11:38:33 -- accel/accel.sh@20 -- # IFS=: 00:11:01.093 11:38:33 -- accel/accel.sh@20 -- # read -r var val 00:11:01.093 11:38:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:01.093 11:38:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.093 11:38:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.093 11:38:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.093 11:38:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:01.093 11:38:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.093 11:38:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.093 11:38:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.093 11:38:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.093 11:38:33 -- accel/accel.sh@42 -- # jq -r . 00:11:01.093 [2024-11-20 11:38:33.835562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:01.093 [2024-11-20 11:38:33.836085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59676 ] 00:11:01.093 [2024-11-20 11:38:33.976611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.093 [2024-11-20 11:38:34.080415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.093 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.093 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.093 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.093 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.093 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.093 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.093 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.093 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.093 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val=0x1 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val=decompress 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val=software 00:11:01.353 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.353 11:38:34 -- accel/accel.sh@23 -- # accel_module=software 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.353 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.353 11:38:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.354 11:38:34 -- accel/accel.sh@21 -- # val=32 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.354 11:38:34 -- 
accel/accel.sh@21 -- # val=32 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.354 11:38:34 -- accel/accel.sh@21 -- # val=2 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.354 11:38:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.354 11:38:34 -- accel/accel.sh@21 -- # val=Yes 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.354 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.354 11:38:34 -- accel/accel.sh@21 -- # val= 00:11:01.354 11:38:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # IFS=: 00:11:01.354 11:38:34 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@21 -- # val= 00:11:02.293 11:38:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # IFS=: 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@21 -- # val= 00:11:02.293 11:38:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # IFS=: 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@21 -- # val= 00:11:02.293 11:38:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # IFS=: 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@21 -- # val= 00:11:02.293 11:38:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # IFS=: 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@21 -- # val= 00:11:02.293 11:38:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # IFS=: 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@21 -- # val= 00:11:02.293 11:38:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # IFS=: 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@21 -- # val= 00:11:02.293 11:38:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # IFS=: 00:11:02.293 11:38:35 -- accel/accel.sh@20 -- # read -r var val 00:11:02.293 11:38:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:02.293 11:38:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:02.293 11:38:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:02.293 00:11:02.293 real 0m2.991s 00:11:02.293 user 0m2.592s 00:11:02.293 sys 0m0.198s 00:11:02.293 11:38:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:02.293 11:38:35 -- common/autotest_common.sh@10 -- # set +x 00:11:02.293 ************************************ 00:11:02.293 END 
TEST accel_decomp_mthread 00:11:02.293 ************************************ 00:11:02.552 11:38:35 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.552 11:38:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:02.552 11:38:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.552 11:38:35 -- common/autotest_common.sh@10 -- # set +x 00:11:02.552 ************************************ 00:11:02.552 START TEST accel_deomp_full_mthread 00:11:02.552 ************************************ 00:11:02.552 11:38:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.552 11:38:35 -- accel/accel.sh@16 -- # local accel_opc 00:11:02.552 11:38:35 -- accel/accel.sh@17 -- # local accel_module 00:11:02.552 11:38:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.552 11:38:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.552 11:38:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.552 11:38:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.552 11:38:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.552 11:38:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.552 11:38:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.552 11:38:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.552 11:38:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.552 11:38:35 -- accel/accel.sh@42 -- # jq -r . 00:11:02.552 [2024-11-20 11:38:35.389963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:02.552 [2024-11-20 11:38:35.390053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:11:02.552 [2024-11-20 11:38:35.528142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.812 [2024-11-20 11:38:35.630137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.189 11:38:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:04.189 00:11:04.189 SPDK Configuration: 00:11:04.189 Core mask: 0x1 00:11:04.189 00:11:04.189 Accel Perf Configuration: 00:11:04.189 Workload Type: decompress 00:11:04.189 Transfer size: 111250 bytes 00:11:04.189 Vector count 1 00:11:04.189 Module: software 00:11:04.189 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:04.189 Queue depth: 32 00:11:04.189 Allocate depth: 32 00:11:04.189 # threads/core: 2 00:11:04.189 Run time: 1 seconds 00:11:04.189 Verify: Yes 00:11:04.189 00:11:04.189 Running for 1 seconds... 
00:11:04.189 00:11:04.189 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:04.189 ------------------------------------------------------------------------------------ 00:11:04.189 0,1 1760/s 72 MiB/s 0 0 00:11:04.189 0,0 1696/s 70 MiB/s 0 0 00:11:04.189 ==================================================================================== 00:11:04.189 Total 3456/s 366 MiB/s 0 0' 00:11:04.189 11:38:36 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:36 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:04.189 11:38:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:04.189 11:38:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:04.189 11:38:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:04.189 11:38:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:04.189 11:38:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:04.189 11:38:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:04.189 11:38:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:04.189 11:38:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:04.189 11:38:36 -- accel/accel.sh@42 -- # jq -r . 00:11:04.189 [2024-11-20 11:38:36.916193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:04.189 [2024-11-20 11:38:36.916274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59733 ] 00:11:04.189 [2024-11-20 11:38:37.055181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.189 [2024-11-20 11:38:37.153927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.189 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.189 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.189 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val=0x1 00:11:04.189 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.189 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.189 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val=decompress 00:11:04.189 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.189 11:38:37 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.189 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.189 11:38:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val=software 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@23 -- # accel_module=software 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val=32 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val=32 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val=2 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val=Yes 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:04.190 11:38:37 -- accel/accel.sh@21 -- # val= 00:11:04.190 11:38:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # IFS=: 00:11:04.190 11:38:37 -- accel/accel.sh@20 -- # read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@21 -- # val= 00:11:05.564 11:38:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # IFS=: 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@21 -- # val= 00:11:05.564 11:38:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # IFS=: 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@21 -- # val= 00:11:05.564 11:38:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # IFS=: 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # 
read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@21 -- # val= 00:11:05.564 11:38:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # IFS=: 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@21 -- # val= 00:11:05.564 11:38:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # IFS=: 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@21 -- # val= 00:11:05.564 11:38:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # IFS=: 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@21 -- # val= 00:11:05.564 11:38:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # IFS=: 00:11:05.564 11:38:38 -- accel/accel.sh@20 -- # read -r var val 00:11:05.564 11:38:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:05.564 11:38:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:05.564 11:38:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.564 00:11:05.564 real 0m3.045s 00:11:05.564 user 0m2.644s 00:11:05.564 sys 0m0.193s 00:11:05.564 11:38:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.564 11:38:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.564 ************************************ 00:11:05.564 END TEST accel_deomp_full_mthread 00:11:05.564 ************************************ 00:11:05.564 11:38:38 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:05.564 11:38:38 -- accel/accel.sh@129 -- # build_accel_config 00:11:05.564 11:38:38 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:05.564 11:38:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.564 11:38:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.564 11:38:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.564 11:38:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.564 11:38:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.564 11:38:38 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.564 11:38:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:05.564 11:38:38 -- accel/accel.sh@42 -- # jq -r . 00:11:05.564 11:38:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.564 11:38:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.564 ************************************ 00:11:05.564 START TEST accel_dif_functional_tests 00:11:05.564 ************************************ 00:11:05.564 11:38:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:05.564 [2024-11-20 11:38:38.514755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:05.564 [2024-11-20 11:38:38.514892] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59769 ] 00:11:05.822 [2024-11-20 11:38:38.639514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.822 [2024-11-20 11:38:38.761368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.822 [2024-11-20 11:38:38.761913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.822 [2024-11-20 11:38:38.761914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.822 00:11:05.822 00:11:05.822 CUnit - A unit testing framework for C - Version 2.1-3 00:11:05.822 http://cunit.sourceforge.net/ 00:11:05.822 00:11:05.822 00:11:05.822 Suite: accel_dif 00:11:05.822 Test: verify: DIF generated, GUARD check ...passed 00:11:05.822 Test: verify: DIF generated, APPTAG check ...passed 00:11:05.822 Test: verify: DIF generated, REFTAG check ...passed 00:11:05.822 Test: verify: DIF not generated, GUARD check ...passed 00:11:05.822 Test: verify: DIF not generated, APPTAG check ...[2024-11-20 11:38:38.839137] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:05.822 [2024-11-20 11:38:38.839259] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:05.822 passed 00:11:05.822 Test: verify: DIF not generated, REFTAG check ...[2024-11-20 11:38:38.839314] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:05.822 [2024-11-20 11:38:38.839358] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:05.822 passed 00:11:05.822 Test: verify: APPTAG correct, APPTAG check ...[2024-11-20 11:38:38.839397] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:05.822 [2024-11-20 11:38:38.839434] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:05.822 passed 00:11:05.822 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:11:05.822 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-11-20 11:38:38.839530] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:05.822 passed 00:11:05.822 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:05.822 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:05.822 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-20 11:38:38.839812] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:05.822 passed 00:11:05.822 Test: generate copy: DIF generated, GUARD check ...passed 00:11:05.822 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:05.822 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:05.822 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:05.822 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:05.822 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:05.822 Test: generate copy: iovecs-len validate ...[2024-11-20 11:38:38.840400] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:05.822 passed 00:11:05.822 Test: generate copy: buffer alignment validate ...passed 00:11:05.822 00:11:05.822 Run Summary: Type Total Ran Passed Failed Inactive 00:11:05.822 suites 1 1 n/a 0 0 00:11:05.822 tests 20 20 20 0 0 00:11:05.822 asserts 204 204 204 0 n/a 00:11:05.822 00:11:05.822 Elapsed time = 0.003 seconds 00:11:06.085 00:11:06.085 real 0m0.595s 00:11:06.085 user 0m0.730s 00:11:06.085 sys 0m0.136s 00:11:06.085 ************************************ 00:11:06.085 END TEST accel_dif_functional_tests 00:11:06.085 ************************************ 00:11:06.085 11:38:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:06.085 11:38:39 -- common/autotest_common.sh@10 -- # set +x 00:11:06.085 ************************************ 00:11:06.085 END TEST accel 00:11:06.085 ************************************ 00:11:06.085 00:11:06.085 real 1m4.132s 00:11:06.085 user 1m8.657s 00:11:06.085 sys 0m5.704s 00:11:06.085 11:38:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:06.085 11:38:39 -- common/autotest_common.sh@10 -- # set +x 00:11:06.349 11:38:39 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:06.349 11:38:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:06.349 11:38:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.349 11:38:39 -- common/autotest_common.sh@10 -- # set +x 00:11:06.349 ************************************ 00:11:06.349 START TEST accel_rpc 00:11:06.349 ************************************ 00:11:06.349 11:38:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:06.349 * Looking for test storage... 00:11:06.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:06.349 11:38:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:06.349 11:38:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:06.349 11:38:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:06.349 11:38:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:06.349 11:38:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:06.349 11:38:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:06.349 11:38:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:06.349 11:38:39 -- scripts/common.sh@335 -- # IFS=.-: 00:11:06.349 11:38:39 -- scripts/common.sh@335 -- # read -ra ver1 00:11:06.349 11:38:39 -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.349 11:38:39 -- scripts/common.sh@336 -- # read -ra ver2 00:11:06.349 11:38:39 -- scripts/common.sh@337 -- # local 'op=<' 00:11:06.349 11:38:39 -- scripts/common.sh@339 -- # ver1_l=2 00:11:06.349 11:38:39 -- scripts/common.sh@340 -- # ver2_l=1 00:11:06.349 11:38:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:06.349 11:38:39 -- scripts/common.sh@343 -- # case "$op" in 00:11:06.349 11:38:39 -- scripts/common.sh@344 -- # : 1 00:11:06.349 11:38:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:06.349 11:38:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.349 11:38:39 -- scripts/common.sh@364 -- # decimal 1 00:11:06.349 11:38:39 -- scripts/common.sh@352 -- # local d=1 00:11:06.349 11:38:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.349 11:38:39 -- scripts/common.sh@354 -- # echo 1 00:11:06.349 11:38:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:06.349 11:38:39 -- scripts/common.sh@365 -- # decimal 2 00:11:06.349 11:38:39 -- scripts/common.sh@352 -- # local d=2 00:11:06.349 11:38:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.349 11:38:39 -- scripts/common.sh@354 -- # echo 2 00:11:06.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.349 11:38:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:06.349 11:38:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:06.349 11:38:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:06.349 11:38:39 -- scripts/common.sh@367 -- # return 0 00:11:06.349 11:38:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.349 11:38:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:06.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.349 --rc genhtml_branch_coverage=1 00:11:06.349 --rc genhtml_function_coverage=1 00:11:06.349 --rc genhtml_legend=1 00:11:06.349 --rc geninfo_all_blocks=1 00:11:06.349 --rc geninfo_unexecuted_blocks=1 00:11:06.349 00:11:06.349 ' 00:11:06.349 11:38:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:06.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.349 --rc genhtml_branch_coverage=1 00:11:06.349 --rc genhtml_function_coverage=1 00:11:06.349 --rc genhtml_legend=1 00:11:06.349 --rc geninfo_all_blocks=1 00:11:06.349 --rc geninfo_unexecuted_blocks=1 00:11:06.349 00:11:06.349 ' 00:11:06.349 11:38:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:06.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.349 --rc genhtml_branch_coverage=1 00:11:06.349 --rc genhtml_function_coverage=1 00:11:06.349 --rc genhtml_legend=1 00:11:06.349 --rc geninfo_all_blocks=1 00:11:06.349 --rc geninfo_unexecuted_blocks=1 00:11:06.349 00:11:06.349 ' 00:11:06.349 11:38:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:06.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.349 --rc genhtml_branch_coverage=1 00:11:06.349 --rc genhtml_function_coverage=1 00:11:06.349 --rc genhtml_legend=1 00:11:06.349 --rc geninfo_all_blocks=1 00:11:06.349 --rc geninfo_unexecuted_blocks=1 00:11:06.349 00:11:06.349 ' 00:11:06.349 11:38:39 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:06.349 11:38:39 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59846 00:11:06.349 11:38:39 -- accel/accel_rpc.sh@15 -- # waitforlisten 59846 00:11:06.349 11:38:39 -- common/autotest_common.sh@829 -- # '[' -z 59846 ']' 00:11:06.349 11:38:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.349 11:38:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.349 11:38:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
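(For reference: the accel_rpc sequence traced below — spdk_tgt started with --wait-for-rpc, the copy opcode reassigned, then the framework initialized and the assignment read back — can be reproduced by hand against a running target. A minimal sketch follows; the rpc.py client path and working directory are assumptions, not taken from this log.)
# launch the target and hold off framework init so opcode assignments are still mutable
./build/bin/spdk_tgt --wait-for-rpc &
# route the copy opcode to the software module (assumption: scripts/rpc.py is the JSON-RPC client used by rpc_cmd)
./scripts/rpc.py accel_assign_opc -o copy -m software
# complete initialization, then confirm which module now owns the copy opcode
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy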
00:11:06.349 11:38:39 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:06.349 11:38:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.349 11:38:39 -- common/autotest_common.sh@10 -- # set +x 00:11:06.349 [2024-11-20 11:38:39.372589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:06.349 [2024-11-20 11:38:39.373135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59846 ] 00:11:06.608 [2024-11-20 11:38:39.511831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.608 [2024-11-20 11:38:39.614771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.608 [2024-11-20 11:38:39.615025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.546 11:38:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.546 11:38:40 -- common/autotest_common.sh@862 -- # return 0 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:07.546 11:38:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:07.546 11:38:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.546 11:38:40 -- common/autotest_common.sh@10 -- # set +x 00:11:07.546 ************************************ 00:11:07.546 START TEST accel_assign_opcode 00:11:07.546 ************************************ 00:11:07.546 11:38:40 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:07.546 11:38:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.546 11:38:40 -- common/autotest_common.sh@10 -- # set +x 00:11:07.546 [2024-11-20 11:38:40.307500] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:07.546 11:38:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:07.546 11:38:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.546 11:38:40 -- common/autotest_common.sh@10 -- # set +x 00:11:07.546 [2024-11-20 11:38:40.315478] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:07.546 11:38:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:07.546 11:38:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.546 11:38:40 -- common/autotest_common.sh@10 -- # set +x 00:11:07.546 11:38:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:07.546 11:38:40 -- accel/accel_rpc.sh@42 -- # grep software 00:11:07.546 11:38:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.546 11:38:40 -- common/autotest_common.sh@10 -- # set +x 
00:11:07.546 11:38:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.547 software 00:11:07.547 00:11:07.547 real 0m0.254s 00:11:07.547 user 0m0.039s 00:11:07.547 sys 0m0.012s 00:11:07.547 11:38:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.547 ************************************ 00:11:07.547 END TEST accel_assign_opcode 00:11:07.547 ************************************ 00:11:07.547 11:38:40 -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 11:38:40 -- accel/accel_rpc.sh@55 -- # killprocess 59846 00:11:07.807 11:38:40 -- common/autotest_common.sh@936 -- # '[' -z 59846 ']' 00:11:07.807 11:38:40 -- common/autotest_common.sh@940 -- # kill -0 59846 00:11:07.807 11:38:40 -- common/autotest_common.sh@941 -- # uname 00:11:07.807 11:38:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.807 11:38:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59846 00:11:07.807 killing process with pid 59846 00:11:07.807 11:38:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:07.807 11:38:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:07.807 11:38:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59846' 00:11:07.807 11:38:40 -- common/autotest_common.sh@955 -- # kill 59846 00:11:07.807 11:38:40 -- common/autotest_common.sh@960 -- # wait 59846 00:11:08.066 00:11:08.067 real 0m1.850s 00:11:08.067 user 0m1.869s 00:11:08.067 sys 0m0.471s 00:11:08.067 ************************************ 00:11:08.067 END TEST accel_rpc 00:11:08.067 ************************************ 00:11:08.067 11:38:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.067 11:38:40 -- common/autotest_common.sh@10 -- # set +x 00:11:08.067 11:38:41 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:08.067 11:38:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:08.067 11:38:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.067 11:38:41 -- common/autotest_common.sh@10 -- # set +x 00:11:08.067 ************************************ 00:11:08.067 START TEST app_cmdline 00:11:08.067 ************************************ 00:11:08.067 11:38:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:08.327 * Looking for test storage... 
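The accel_assign_opcode test above assigns the copy opcode first to a non-existent module and then to the software module, finishes framework initialization, and verifies the assignment. A condensed sketch of the same flow using the RPC names shown in the trace; rpc.py is assumed to be on PATH and talking to the default /var/tmp/spdk.sock:

    # Opcode assignment is only accepted before framework_start_init.
    rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init even for a bogus module
    rpc.py accel_assign_opc -o copy -m software     # re-assign copy to the software module
    rpc.py framework_start_init                     # target was started with --wait-for-rpc
    rpc.py accel_get_opc_assignments | jq -r .copy  # expected to print "software"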
00:11:08.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:08.327 11:38:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:08.327 11:38:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:08.327 11:38:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:08.327 11:38:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:08.327 11:38:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:08.327 11:38:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:08.327 11:38:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:08.327 11:38:41 -- scripts/common.sh@335 -- # IFS=.-: 00:11:08.327 11:38:41 -- scripts/common.sh@335 -- # read -ra ver1 00:11:08.327 11:38:41 -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.327 11:38:41 -- scripts/common.sh@336 -- # read -ra ver2 00:11:08.327 11:38:41 -- scripts/common.sh@337 -- # local 'op=<' 00:11:08.327 11:38:41 -- scripts/common.sh@339 -- # ver1_l=2 00:11:08.327 11:38:41 -- scripts/common.sh@340 -- # ver2_l=1 00:11:08.327 11:38:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:08.327 11:38:41 -- scripts/common.sh@343 -- # case "$op" in 00:11:08.327 11:38:41 -- scripts/common.sh@344 -- # : 1 00:11:08.327 11:38:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:08.327 11:38:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.327 11:38:41 -- scripts/common.sh@364 -- # decimal 1 00:11:08.327 11:38:41 -- scripts/common.sh@352 -- # local d=1 00:11:08.327 11:38:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.327 11:38:41 -- scripts/common.sh@354 -- # echo 1 00:11:08.327 11:38:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:08.327 11:38:41 -- scripts/common.sh@365 -- # decimal 2 00:11:08.327 11:38:41 -- scripts/common.sh@352 -- # local d=2 00:11:08.327 11:38:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.327 11:38:41 -- scripts/common.sh@354 -- # echo 2 00:11:08.327 11:38:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:08.327 11:38:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:08.327 11:38:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:08.327 11:38:41 -- scripts/common.sh@367 -- # return 0 00:11:08.327 11:38:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.327 11:38:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:08.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.327 --rc genhtml_branch_coverage=1 00:11:08.327 --rc genhtml_function_coverage=1 00:11:08.327 --rc genhtml_legend=1 00:11:08.327 --rc geninfo_all_blocks=1 00:11:08.327 --rc geninfo_unexecuted_blocks=1 00:11:08.327 00:11:08.327 ' 00:11:08.327 11:38:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:08.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.327 --rc genhtml_branch_coverage=1 00:11:08.327 --rc genhtml_function_coverage=1 00:11:08.327 --rc genhtml_legend=1 00:11:08.327 --rc geninfo_all_blocks=1 00:11:08.327 --rc geninfo_unexecuted_blocks=1 00:11:08.327 00:11:08.327 ' 00:11:08.327 11:38:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:08.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.327 --rc genhtml_branch_coverage=1 00:11:08.327 --rc genhtml_function_coverage=1 00:11:08.327 --rc genhtml_legend=1 00:11:08.327 --rc geninfo_all_blocks=1 00:11:08.327 --rc geninfo_unexecuted_blocks=1 00:11:08.327 00:11:08.327 ' 00:11:08.327 11:38:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:08.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.327 --rc genhtml_branch_coverage=1 00:11:08.327 --rc genhtml_function_coverage=1 00:11:08.327 --rc genhtml_legend=1 00:11:08.327 --rc geninfo_all_blocks=1 00:11:08.327 --rc geninfo_unexecuted_blocks=1 00:11:08.327 00:11:08.327 ' 00:11:08.327 11:38:41 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:08.327 11:38:41 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59965 00:11:08.327 11:38:41 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:08.327 11:38:41 -- app/cmdline.sh@18 -- # waitforlisten 59965 00:11:08.327 11:38:41 -- common/autotest_common.sh@829 -- # '[' -z 59965 ']' 00:11:08.327 11:38:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.327 11:38:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.327 11:38:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.327 11:38:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.327 11:38:41 -- common/autotest_common.sh@10 -- # set +x 00:11:08.327 [2024-11-20 11:38:41.309356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:08.327 [2024-11-20 11:38:41.309500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59965 ] 00:11:08.586 [2024-11-20 11:38:41.448198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.586 [2024-11-20 11:38:41.550029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.586 [2024-11-20 11:38:41.550263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.544 11:38:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.544 11:38:42 -- common/autotest_common.sh@862 -- # return 0 00:11:09.544 11:38:42 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:09.544 { 00:11:09.544 "fields": { 00:11:09.544 "commit": "c13c99a5e", 00:11:09.544 "major": 24, 00:11:09.544 "minor": 1, 00:11:09.544 "patch": 1, 00:11:09.544 "suffix": "-pre" 00:11:09.544 }, 00:11:09.544 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:11:09.544 } 00:11:09.544 11:38:42 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:09.544 11:38:42 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:09.544 11:38:42 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:09.544 11:38:42 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:09.544 11:38:42 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:09.544 11:38:42 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:09.544 11:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.544 11:38:42 -- app/cmdline.sh@26 -- # sort 00:11:09.544 11:38:42 -- common/autotest_common.sh@10 -- # set +x 00:11:09.544 11:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.544 11:38:42 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:09.544 11:38:42 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:09.544 11:38:42 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.544 11:38:42 -- common/autotest_common.sh@650 -- # local es=0 00:11:09.544 11:38:42 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.544 11:38:42 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.544 11:38:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.544 11:38:42 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.544 11:38:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.544 11:38:42 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.544 11:38:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.544 11:38:42 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.544 11:38:42 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:09.544 11:38:42 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.802 2024/11/20 11:38:42 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:11:09.802 request: 00:11:09.802 { 00:11:09.802 "method": "env_dpdk_get_mem_stats", 00:11:09.802 "params": {} 00:11:09.802 } 00:11:09.802 Got JSON-RPC error response 00:11:09.802 GoRPCClient: error on JSON-RPC call 00:11:09.802 11:38:42 -- common/autotest_common.sh@653 -- # es=1 00:11:09.802 11:38:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:09.802 11:38:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:09.802 11:38:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:09.802 11:38:42 -- app/cmdline.sh@1 -- # killprocess 59965 00:11:09.802 11:38:42 -- common/autotest_common.sh@936 -- # '[' -z 59965 ']' 00:11:09.802 11:38:42 -- common/autotest_common.sh@940 -- # kill -0 59965 00:11:09.802 11:38:42 -- common/autotest_common.sh@941 -- # uname 00:11:09.802 11:38:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:09.802 11:38:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59965 00:11:09.802 11:38:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:09.802 11:38:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:09.802 killing process with pid 59965 00:11:09.802 11:38:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59965' 00:11:09.802 11:38:42 -- common/autotest_common.sh@955 -- # kill 59965 00:11:09.802 11:38:42 -- common/autotest_common.sh@960 -- # wait 59965 00:11:10.370 00:11:10.370 real 0m2.136s 00:11:10.370 user 0m2.616s 00:11:10.370 sys 0m0.458s 00:11:10.370 11:38:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.370 11:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.370 ************************************ 00:11:10.370 END TEST app_cmdline 00:11:10.370 ************************************ 00:11:10.370 11:38:43 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:10.370 11:38:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.370 11:38:43 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.370 11:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.370 ************************************ 00:11:10.370 START TEST version 00:11:10.370 ************************************ 00:11:10.370 11:38:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:10.370 * Looking for test storage... 00:11:10.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:10.370 11:38:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:10.370 11:38:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:10.370 11:38:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:10.370 11:38:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:10.370 11:38:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:10.370 11:38:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:10.370 11:38:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:10.370 11:38:43 -- scripts/common.sh@335 -- # IFS=.-: 00:11:10.370 11:38:43 -- scripts/common.sh@335 -- # read -ra ver1 00:11:10.370 11:38:43 -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.370 11:38:43 -- scripts/common.sh@336 -- # read -ra ver2 00:11:10.370 11:38:43 -- scripts/common.sh@337 -- # local 'op=<' 00:11:10.370 11:38:43 -- scripts/common.sh@339 -- # ver1_l=2 00:11:10.370 11:38:43 -- scripts/common.sh@340 -- # ver2_l=1 00:11:10.370 11:38:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:10.370 11:38:43 -- scripts/common.sh@343 -- # case "$op" in 00:11:10.370 11:38:43 -- scripts/common.sh@344 -- # : 1 00:11:10.370 11:38:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:10.370 11:38:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.370 11:38:43 -- scripts/common.sh@364 -- # decimal 1 00:11:10.370 11:38:43 -- scripts/common.sh@352 -- # local d=1 00:11:10.370 11:38:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.370 11:38:43 -- scripts/common.sh@354 -- # echo 1 00:11:10.370 11:38:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:10.370 11:38:43 -- scripts/common.sh@365 -- # decimal 2 00:11:10.370 11:38:43 -- scripts/common.sh@352 -- # local d=2 00:11:10.370 11:38:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.370 11:38:43 -- scripts/common.sh@354 -- # echo 2 00:11:10.370 11:38:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:10.370 11:38:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:10.370 11:38:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:10.370 11:38:43 -- scripts/common.sh@367 -- # return 0 00:11:10.370 11:38:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.370 11:38:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.370 --rc genhtml_branch_coverage=1 00:11:10.370 --rc genhtml_function_coverage=1 00:11:10.370 --rc genhtml_legend=1 00:11:10.370 --rc geninfo_all_blocks=1 00:11:10.370 --rc geninfo_unexecuted_blocks=1 00:11:10.370 00:11:10.370 ' 00:11:10.370 11:38:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:10.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.371 --rc genhtml_branch_coverage=1 00:11:10.371 --rc genhtml_function_coverage=1 00:11:10.371 --rc genhtml_legend=1 00:11:10.371 --rc geninfo_all_blocks=1 00:11:10.371 --rc geninfo_unexecuted_blocks=1 00:11:10.371 00:11:10.371 ' 00:11:10.371 
11:38:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:10.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.371 --rc genhtml_branch_coverage=1 00:11:10.371 --rc genhtml_function_coverage=1 00:11:10.371 --rc genhtml_legend=1 00:11:10.371 --rc geninfo_all_blocks=1 00:11:10.371 --rc geninfo_unexecuted_blocks=1 00:11:10.371 00:11:10.371 ' 00:11:10.371 11:38:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:10.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.371 --rc genhtml_branch_coverage=1 00:11:10.371 --rc genhtml_function_coverage=1 00:11:10.371 --rc genhtml_legend=1 00:11:10.371 --rc geninfo_all_blocks=1 00:11:10.371 --rc geninfo_unexecuted_blocks=1 00:11:10.371 00:11:10.371 ' 00:11:10.371 11:38:43 -- app/version.sh@17 -- # get_header_version major 00:11:10.371 11:38:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.371 11:38:43 -- app/version.sh@14 -- # cut -f2 00:11:10.371 11:38:43 -- app/version.sh@14 -- # tr -d '"' 00:11:10.371 11:38:43 -- app/version.sh@17 -- # major=24 00:11:10.371 11:38:43 -- app/version.sh@18 -- # get_header_version minor 00:11:10.371 11:38:43 -- app/version.sh@14 -- # cut -f2 00:11:10.371 11:38:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.371 11:38:43 -- app/version.sh@14 -- # tr -d '"' 00:11:10.371 11:38:43 -- app/version.sh@18 -- # minor=1 00:11:10.371 11:38:43 -- app/version.sh@19 -- # get_header_version patch 00:11:10.371 11:38:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.371 11:38:43 -- app/version.sh@14 -- # cut -f2 00:11:10.371 11:38:43 -- app/version.sh@14 -- # tr -d '"' 00:11:10.631 11:38:43 -- app/version.sh@19 -- # patch=1 00:11:10.631 11:38:43 -- app/version.sh@20 -- # get_header_version suffix 00:11:10.631 11:38:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.631 11:38:43 -- app/version.sh@14 -- # cut -f2 00:11:10.631 11:38:43 -- app/version.sh@14 -- # tr -d '"' 00:11:10.631 11:38:43 -- app/version.sh@20 -- # suffix=-pre 00:11:10.631 11:38:43 -- app/version.sh@22 -- # version=24.1 00:11:10.631 11:38:43 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:10.631 11:38:43 -- app/version.sh@25 -- # version=24.1.1 00:11:10.631 11:38:43 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:10.631 11:38:43 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:10.631 11:38:43 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:10.631 11:38:43 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:10.631 11:38:43 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:10.631 00:11:10.631 real 0m0.229s 00:11:10.631 user 0m0.128s 00:11:10.631 sys 0m0.138s 00:11:10.631 11:38:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.631 11:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.631 ************************************ 00:11:10.631 END TEST version 00:11:10.631 ************************************ 00:11:10.631 11:38:43 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:11:10.631 
11:38:43 -- spdk/autotest.sh@191 -- # uname -s 00:11:10.631 11:38:43 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:11:10.631 11:38:43 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:11:10.631 11:38:43 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:11:10.631 11:38:43 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:11:10.631 11:38:43 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:11:10.631 11:38:43 -- spdk/autotest.sh@255 -- # timing_exit lib 00:11:10.631 11:38:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:10.631 11:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.631 11:38:43 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:11:10.631 11:38:43 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:11:10.631 11:38:43 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:11:10.631 11:38:43 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:11:10.631 11:38:43 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:11:10.631 11:38:43 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:11:10.631 11:38:43 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:10.631 11:38:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.631 11:38:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.631 11:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.631 ************************************ 00:11:10.631 START TEST nvmf_tcp 00:11:10.631 ************************************ 00:11:10.631 11:38:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:10.631 * Looking for test storage... 00:11:10.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:10.631 11:38:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:10.631 11:38:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:10.631 11:38:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:10.891 11:38:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:10.891 11:38:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:10.891 11:38:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:10.891 11:38:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:10.891 11:38:43 -- scripts/common.sh@335 -- # IFS=.-: 00:11:10.891 11:38:43 -- scripts/common.sh@335 -- # read -ra ver1 00:11:10.891 11:38:43 -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.891 11:38:43 -- scripts/common.sh@336 -- # read -ra ver2 00:11:10.891 11:38:43 -- scripts/common.sh@337 -- # local 'op=<' 00:11:10.891 11:38:43 -- scripts/common.sh@339 -- # ver1_l=2 00:11:10.891 11:38:43 -- scripts/common.sh@340 -- # ver2_l=1 00:11:10.891 11:38:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:10.891 11:38:43 -- scripts/common.sh@343 -- # case "$op" in 00:11:10.891 11:38:43 -- scripts/common.sh@344 -- # : 1 00:11:10.891 11:38:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:10.891 11:38:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.891 11:38:43 -- scripts/common.sh@364 -- # decimal 1 00:11:10.891 11:38:43 -- scripts/common.sh@352 -- # local d=1 00:11:10.891 11:38:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.891 11:38:43 -- scripts/common.sh@354 -- # echo 1 00:11:10.892 11:38:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:10.892 11:38:43 -- scripts/common.sh@365 -- # decimal 2 00:11:10.892 11:38:43 -- scripts/common.sh@352 -- # local d=2 00:11:10.892 11:38:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.892 11:38:43 -- scripts/common.sh@354 -- # echo 2 00:11:10.892 11:38:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:10.892 11:38:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:10.892 11:38:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:10.892 11:38:43 -- scripts/common.sh@367 -- # return 0 00:11:10.892 11:38:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.892 11:38:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:10.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.892 --rc genhtml_branch_coverage=1 00:11:10.892 --rc genhtml_function_coverage=1 00:11:10.892 --rc genhtml_legend=1 00:11:10.892 --rc geninfo_all_blocks=1 00:11:10.892 --rc geninfo_unexecuted_blocks=1 00:11:10.892 00:11:10.892 ' 00:11:10.892 11:38:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:10.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.892 --rc genhtml_branch_coverage=1 00:11:10.892 --rc genhtml_function_coverage=1 00:11:10.892 --rc genhtml_legend=1 00:11:10.892 --rc geninfo_all_blocks=1 00:11:10.892 --rc geninfo_unexecuted_blocks=1 00:11:10.892 00:11:10.892 ' 00:11:10.892 11:38:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:10.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.892 --rc genhtml_branch_coverage=1 00:11:10.892 --rc genhtml_function_coverage=1 00:11:10.892 --rc genhtml_legend=1 00:11:10.892 --rc geninfo_all_blocks=1 00:11:10.892 --rc geninfo_unexecuted_blocks=1 00:11:10.892 00:11:10.892 ' 00:11:10.892 11:38:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:10.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.892 --rc genhtml_branch_coverage=1 00:11:10.892 --rc genhtml_function_coverage=1 00:11:10.892 --rc genhtml_legend=1 00:11:10.892 --rc geninfo_all_blocks=1 00:11:10.892 --rc geninfo_unexecuted_blocks=1 00:11:10.892 00:11:10.892 ' 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@10 -- # uname -s 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.892 11:38:43 -- nvmf/common.sh@7 -- # uname -s 00:11:10.892 11:38:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.892 11:38:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.892 11:38:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.892 11:38:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.892 11:38:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.892 11:38:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.892 11:38:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.892 11:38:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.892 11:38:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.892 11:38:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.892 11:38:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:11:10.892 11:38:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:11:10.892 11:38:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.892 11:38:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.892 11:38:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.892 11:38:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.892 11:38:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.892 11:38:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.892 11:38:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.892 11:38:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.892 11:38:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.892 11:38:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.892 11:38:43 -- paths/export.sh@5 -- # export PATH 00:11:10.892 11:38:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.892 11:38:43 -- nvmf/common.sh@46 -- # : 0 00:11:10.892 11:38:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:10.892 11:38:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:10.892 11:38:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:10.892 11:38:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.892 11:38:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.892 11:38:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:10.892 11:38:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:10.892 11:38:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:11:10.892 11:38:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.892 11:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:11:10.892 11:38:43 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:10.892 11:38:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.892 11:38:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.892 11:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.892 ************************************ 00:11:10.892 START TEST nvmf_example 00:11:10.892 ************************************ 00:11:10.892 11:38:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:10.892 * Looking for test storage... 00:11:10.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.892 11:38:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:10.892 11:38:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:10.892 11:38:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:11.152 11:38:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:11.152 11:38:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:11.152 11:38:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:11.152 11:38:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:11.152 11:38:43 -- scripts/common.sh@335 -- # IFS=.-: 00:11:11.152 11:38:43 -- scripts/common.sh@335 -- # read -ra ver1 00:11:11.152 11:38:43 -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.152 11:38:43 -- scripts/common.sh@336 -- # read -ra ver2 00:11:11.152 11:38:43 -- scripts/common.sh@337 -- # local 'op=<' 00:11:11.152 11:38:43 -- scripts/common.sh@339 -- # ver1_l=2 00:11:11.152 11:38:43 -- scripts/common.sh@340 -- # ver2_l=1 00:11:11.152 11:38:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:11.152 11:38:43 -- scripts/common.sh@343 -- # case "$op" in 00:11:11.152 11:38:43 -- scripts/common.sh@344 -- # : 1 00:11:11.152 11:38:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:11.152 11:38:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.152 11:38:43 -- scripts/common.sh@364 -- # decimal 1 00:11:11.152 11:38:43 -- scripts/common.sh@352 -- # local d=1 00:11:11.152 11:38:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.152 11:38:43 -- scripts/common.sh@354 -- # echo 1 00:11:11.152 11:38:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:11.152 11:38:43 -- scripts/common.sh@365 -- # decimal 2 00:11:11.152 11:38:43 -- scripts/common.sh@352 -- # local d=2 00:11:11.152 11:38:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.152 11:38:43 -- scripts/common.sh@354 -- # echo 2 00:11:11.152 11:38:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:11.152 11:38:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:11.152 11:38:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:11.152 11:38:43 -- scripts/common.sh@367 -- # return 0 00:11:11.152 11:38:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.152 11:38:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:11.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.152 --rc genhtml_branch_coverage=1 00:11:11.152 --rc genhtml_function_coverage=1 00:11:11.152 --rc genhtml_legend=1 00:11:11.152 --rc geninfo_all_blocks=1 00:11:11.152 --rc geninfo_unexecuted_blocks=1 00:11:11.152 00:11:11.152 ' 00:11:11.152 11:38:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:11.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.152 --rc genhtml_branch_coverage=1 00:11:11.152 --rc genhtml_function_coverage=1 00:11:11.152 --rc genhtml_legend=1 00:11:11.152 --rc geninfo_all_blocks=1 00:11:11.152 --rc geninfo_unexecuted_blocks=1 00:11:11.152 00:11:11.152 ' 00:11:11.152 11:38:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:11.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.152 --rc genhtml_branch_coverage=1 00:11:11.152 --rc genhtml_function_coverage=1 00:11:11.152 --rc genhtml_legend=1 00:11:11.152 --rc geninfo_all_blocks=1 00:11:11.152 --rc geninfo_unexecuted_blocks=1 00:11:11.152 00:11:11.152 ' 00:11:11.152 11:38:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:11.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.152 --rc genhtml_branch_coverage=1 00:11:11.152 --rc genhtml_function_coverage=1 00:11:11.152 --rc genhtml_legend=1 00:11:11.152 --rc geninfo_all_blocks=1 00:11:11.152 --rc geninfo_unexecuted_blocks=1 00:11:11.152 00:11:11.152 ' 00:11:11.152 11:38:43 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.152 11:38:43 -- nvmf/common.sh@7 -- # uname -s 00:11:11.152 11:38:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.152 11:38:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.152 11:38:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.152 11:38:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.152 11:38:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.152 11:38:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.152 11:38:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.152 11:38:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.152 11:38:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.152 11:38:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.152 11:38:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
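The host NQN and host ID above come from nvme-cli's gen-hostnqn. A small illustration of its use; deriving the host ID from the NQN suffix is an assumption made for the sketch, not necessarily how nvmf/common.sh computes NVME_HOSTID:

    nvme gen-hostnqn
    # => nqn.2014-08.org.nvmexpress:uuid:<random-uuid>   (uuid differs on every call)
    HOSTNQN=$(nvme gen-hostnqn)
    HOSTID=${HOSTNQN##*:}     # strip everything up to the last ':' to keep the uuid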
00:11:11.152 11:38:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:11:11.152 11:38:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.152 11:38:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.152 11:38:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.152 11:38:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.152 11:38:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.152 11:38:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.152 11:38:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.152 11:38:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.152 11:38:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.152 11:38:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.152 11:38:44 -- paths/export.sh@5 -- # export PATH 00:11:11.152 11:38:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.152 11:38:44 -- nvmf/common.sh@46 -- # : 0 00:11:11.152 11:38:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:11.152 11:38:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:11.152 11:38:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:11.152 11:38:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.152 11:38:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.152 11:38:44 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:11.152 11:38:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:11.152 11:38:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:11.152 11:38:44 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:11.152 11:38:44 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:11.152 11:38:44 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:11.152 11:38:44 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:11.152 11:38:44 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:11.152 11:38:44 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:11.152 11:38:44 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:11.152 11:38:44 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:11.152 11:38:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.152 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:11:11.152 11:38:44 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:11.152 11:38:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:11.152 11:38:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.152 11:38:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:11.152 11:38:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:11.152 11:38:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:11.152 11:38:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.152 11:38:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.152 11:38:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.152 11:38:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:11.152 11:38:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:11.153 11:38:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:11.153 11:38:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:11.153 11:38:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:11.153 11:38:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:11.153 11:38:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.153 11:38:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.153 11:38:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:11.153 11:38:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:11.153 11:38:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:11.153 11:38:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:11.153 11:38:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:11.153 11:38:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.153 11:38:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:11.153 11:38:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:11.153 11:38:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:11.153 11:38:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:11.153 11:38:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:11.153 Cannot find device "nvmf_init_br" 00:11:11.153 11:38:44 -- nvmf/common.sh@153 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:11.153 Cannot find device "nvmf_tgt_br" 00:11:11.153 11:38:44 -- nvmf/common.sh@154 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.153 Cannot find device "nvmf_tgt_br2" 
00:11:11.153 11:38:44 -- nvmf/common.sh@155 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:11.153 Cannot find device "nvmf_init_br" 00:11:11.153 11:38:44 -- nvmf/common.sh@156 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:11.153 Cannot find device "nvmf_tgt_br" 00:11:11.153 11:38:44 -- nvmf/common.sh@157 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:11.153 Cannot find device "nvmf_tgt_br2" 00:11:11.153 11:38:44 -- nvmf/common.sh@158 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:11.153 Cannot find device "nvmf_br" 00:11:11.153 11:38:44 -- nvmf/common.sh@159 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:11.153 Cannot find device "nvmf_init_if" 00:11:11.153 11:38:44 -- nvmf/common.sh@160 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.153 11:38:44 -- nvmf/common.sh@161 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.153 11:38:44 -- nvmf/common.sh@162 -- # true 00:11:11.153 11:38:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.153 11:38:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.153 11:38:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.412 11:38:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.412 11:38:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.412 11:38:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.412 11:38:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.412 11:38:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.412 11:38:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.412 11:38:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:11.412 11:38:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:11.412 11:38:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:11.412 11:38:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:11.412 11:38:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.412 11:38:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.412 11:38:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.412 11:38:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:11.412 11:38:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:11.412 11:38:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.412 11:38:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.412 11:38:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.412 11:38:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.412 11:38:44 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.412 11:38:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:11.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:11:11.412 00:11:11.412 --- 10.0.0.2 ping statistics --- 00:11:11.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.412 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:11.412 11:38:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:11.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:11.412 00:11:11.412 --- 10.0.0.3 ping statistics --- 00:11:11.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.412 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:11.412 11:38:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:11.412 00:11:11.412 --- 10.0.0.1 ping statistics --- 00:11:11.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.412 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:11.412 11:38:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.412 11:38:44 -- nvmf/common.sh@421 -- # return 0 00:11:11.412 11:38:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:11.412 11:38:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.412 11:38:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:11.412 11:38:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:11.412 11:38:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.412 11:38:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:11.412 11:38:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:11.412 11:38:44 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:11.412 11:38:44 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:11.412 11:38:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.412 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:11:11.412 11:38:44 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:11.412 11:38:44 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:11.412 11:38:44 -- target/nvmf_example.sh@34 -- # nvmfpid=60327 00:11:11.412 11:38:44 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:11.412 11:38:44 -- target/nvmf_example.sh@36 -- # waitforlisten 60327 00:11:11.412 11:38:44 -- common/autotest_common.sh@829 -- # '[' -z 60327 ']' 00:11:11.412 11:38:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.412 11:38:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.412 11:38:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
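nvmf_veth_init above builds a bridged veth topology with the target interfaces inside the nvmf_tgt_ns_spdk namespace and verifies connectivity with ping. A trimmed recap using the same device names and addresses as the trace; the second target interface, cleanup and error handling are omitted:

    # Namespace plus one veth pair per side, joined by a bridge on the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2      # initiator -> target, as the log verifies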
00:11:11.412 11:38:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.412 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:11:11.412 11:38:44 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:12.359 11:38:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.359 11:38:45 -- common/autotest_common.sh@862 -- # return 0 00:11:12.359 11:38:45 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:12.359 11:38:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.359 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.619 11:38:45 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:12.619 11:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.619 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.619 11:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.619 11:38:45 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:12.619 11:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.619 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.619 11:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.619 11:38:45 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:12.619 11:38:45 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:12.619 11:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.619 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.619 11:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.619 11:38:45 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:12.619 11:38:45 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.619 11:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.619 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.619 11:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.619 11:38:45 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.619 11:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.619 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.619 11:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.619 11:38:45 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:11:12.619 11:38:45 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:24.869 Initializing NVMe Controllers 00:11:24.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:24.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:24.869 Initialization complete. Launching workers. 
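The target side of nvmf_example is configured entirely over JSON-RPC before spdk_nvme_perf drives I/O against it. A condensed recap of the RPC sequence and the perf invocation shown above; rpc.py and spdk_nvme_perf are assumed to be on PATH and pointed at the running example app:

    # Transport, backing bdev, subsystem, namespace, listener -- in that order.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # 64 MB bdev, 512-byte blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 4 KiB random read/write mix (-M 30) for 10 s at queue depth 64, as in the log:
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'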
00:11:24.869 ======================================================== 00:11:24.869 Latency(us) 00:11:24.869 Device Information : IOPS MiB/s Average min max 00:11:24.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15234.42 59.51 4200.65 661.46 23179.40 00:11:24.869 ======================================================== 00:11:24.869 Total : 15234.42 59.51 4200.65 661.46 23179.40 00:11:24.869 00:11:24.869 11:38:55 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:24.869 11:38:55 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:24.869 11:38:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:24.869 11:38:55 -- nvmf/common.sh@116 -- # sync 00:11:24.869 11:38:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:24.869 11:38:55 -- nvmf/common.sh@119 -- # set +e 00:11:24.869 11:38:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:24.869 11:38:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:24.869 rmmod nvme_tcp 00:11:24.869 rmmod nvme_fabrics 00:11:24.869 rmmod nvme_keyring 00:11:24.869 11:38:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:24.869 11:38:55 -- nvmf/common.sh@123 -- # set -e 00:11:24.869 11:38:55 -- nvmf/common.sh@124 -- # return 0 00:11:24.869 11:38:55 -- nvmf/common.sh@477 -- # '[' -n 60327 ']' 00:11:24.869 11:38:55 -- nvmf/common.sh@478 -- # killprocess 60327 00:11:24.869 11:38:55 -- common/autotest_common.sh@936 -- # '[' -z 60327 ']' 00:11:24.869 11:38:55 -- common/autotest_common.sh@940 -- # kill -0 60327 00:11:24.869 11:38:55 -- common/autotest_common.sh@941 -- # uname 00:11:24.869 11:38:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:24.869 11:38:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60327 00:11:24.869 killing process with pid 60327 00:11:24.869 11:38:55 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:11:24.869 11:38:55 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:11:24.869 11:38:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60327' 00:11:24.869 11:38:55 -- common/autotest_common.sh@955 -- # kill 60327 00:11:24.869 11:38:55 -- common/autotest_common.sh@960 -- # wait 60327 00:11:24.869 nvmf threads initialize successfully 00:11:24.869 bdev subsystem init successfully 00:11:24.869 created a nvmf target service 00:11:24.869 create targets's poll groups done 00:11:24.869 all subsystems of target started 00:11:24.869 nvmf target is running 00:11:24.869 all subsystems of target stopped 00:11:24.869 destroy targets's poll groups done 00:11:24.869 destroyed the nvmf target service 00:11:24.869 bdev subsystem finish successfully 00:11:24.869 nvmf threads destroy successfully 00:11:24.869 11:38:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:24.869 11:38:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:24.869 11:38:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:24.869 11:38:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.869 11:38:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:24.869 11:38:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.869 11:38:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.869 11:38:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.869 11:38:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:24.869 11:38:56 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:24.869 11:38:56 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:11:24.869 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:24.869 00:11:24.869 real 0m12.243s 00:11:24.869 user 0m44.071s 00:11:24.869 sys 0m1.714s 00:11:24.869 11:38:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.869 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:24.869 ************************************ 00:11:24.869 END TEST nvmf_example 00:11:24.869 ************************************ 00:11:24.869 11:38:56 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:24.869 11:38:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:24.869 11:38:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.869 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:24.869 ************************************ 00:11:24.869 START TEST nvmf_filesystem 00:11:24.869 ************************************ 00:11:24.869 11:38:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:24.869 * Looking for test storage... 00:11:24.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.869 11:38:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:24.869 11:38:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:24.869 11:38:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:24.869 11:38:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:24.869 11:38:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:24.869 11:38:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:24.869 11:38:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:24.869 11:38:56 -- scripts/common.sh@335 -- # IFS=.-: 00:11:24.869 11:38:56 -- scripts/common.sh@335 -- # read -ra ver1 00:11:24.869 11:38:56 -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.869 11:38:56 -- scripts/common.sh@336 -- # read -ra ver2 00:11:24.869 11:38:56 -- scripts/common.sh@337 -- # local 'op=<' 00:11:24.869 11:38:56 -- scripts/common.sh@339 -- # ver1_l=2 00:11:24.869 11:38:56 -- scripts/common.sh@340 -- # ver2_l=1 00:11:24.869 11:38:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:24.869 11:38:56 -- scripts/common.sh@343 -- # case "$op" in 00:11:24.869 11:38:56 -- scripts/common.sh@344 -- # : 1 00:11:24.869 11:38:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:24.869 11:38:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.869 11:38:56 -- scripts/common.sh@364 -- # decimal 1 00:11:24.869 11:38:56 -- scripts/common.sh@352 -- # local d=1 00:11:24.869 11:38:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.869 11:38:56 -- scripts/common.sh@354 -- # echo 1 00:11:24.870 11:38:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:24.870 11:38:56 -- scripts/common.sh@365 -- # decimal 2 00:11:24.870 11:38:56 -- scripts/common.sh@352 -- # local d=2 00:11:24.870 11:38:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.870 11:38:56 -- scripts/common.sh@354 -- # echo 2 00:11:24.870 11:38:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:24.870 11:38:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:24.870 11:38:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:24.870 11:38:56 -- scripts/common.sh@367 -- # return 0 00:11:24.870 11:38:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.870 11:38:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.870 --rc genhtml_branch_coverage=1 00:11:24.870 --rc genhtml_function_coverage=1 00:11:24.870 --rc genhtml_legend=1 00:11:24.870 --rc geninfo_all_blocks=1 00:11:24.870 --rc geninfo_unexecuted_blocks=1 00:11:24.870 00:11:24.870 ' 00:11:24.870 11:38:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.870 --rc genhtml_branch_coverage=1 00:11:24.870 --rc genhtml_function_coverage=1 00:11:24.870 --rc genhtml_legend=1 00:11:24.870 --rc geninfo_all_blocks=1 00:11:24.870 --rc geninfo_unexecuted_blocks=1 00:11:24.870 00:11:24.870 ' 00:11:24.870 11:38:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.870 --rc genhtml_branch_coverage=1 00:11:24.870 --rc genhtml_function_coverage=1 00:11:24.870 --rc genhtml_legend=1 00:11:24.870 --rc geninfo_all_blocks=1 00:11:24.870 --rc geninfo_unexecuted_blocks=1 00:11:24.870 00:11:24.870 ' 00:11:24.870 11:38:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.870 --rc genhtml_branch_coverage=1 00:11:24.870 --rc genhtml_function_coverage=1 00:11:24.870 --rc genhtml_legend=1 00:11:24.870 --rc geninfo_all_blocks=1 00:11:24.870 --rc geninfo_unexecuted_blocks=1 00:11:24.870 00:11:24.870 ' 00:11:24.870 11:38:56 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:24.870 11:38:56 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:24.870 11:38:56 -- common/autotest_common.sh@34 -- # set -e 00:11:24.870 11:38:56 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:24.870 11:38:56 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:24.870 11:38:56 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:24.870 11:38:56 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:24.870 11:38:56 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:24.870 11:38:56 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:24.870 11:38:56 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:24.870 11:38:56 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:24.870 11:38:56 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:11:24.870 11:38:56 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:24.870 11:38:56 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:24.870 11:38:56 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:24.870 11:38:56 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:24.870 11:38:56 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:24.870 11:38:56 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:24.870 11:38:56 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:24.870 11:38:56 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:24.870 11:38:56 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:24.870 11:38:56 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:24.870 11:38:56 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:24.870 11:38:56 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:24.870 11:38:56 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:24.870 11:38:56 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:24.870 11:38:56 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:24.870 11:38:56 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:24.870 11:38:56 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:24.870 11:38:56 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:24.870 11:38:56 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:24.870 11:38:56 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:24.870 11:38:56 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:24.870 11:38:56 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:24.870 11:38:56 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:24.870 11:38:56 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:24.870 11:38:56 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:24.870 11:38:56 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:24.870 11:38:56 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:24.870 11:38:56 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:24.870 11:38:56 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:24.870 11:38:56 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:24.870 11:38:56 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:24.870 11:38:56 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:24.870 11:38:56 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:24.870 11:38:56 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:24.870 11:38:56 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:24.870 11:38:56 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:24.870 11:38:56 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:24.870 11:38:56 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:24.870 11:38:56 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:24.870 11:38:56 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:24.870 11:38:56 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:11:24.870 11:38:56 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:11:24.870 11:38:56 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:24.870 11:38:56 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:11:24.870 11:38:56 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:11:24.870 11:38:56 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:11:24.870 
11:38:56 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:11:24.870 11:38:56 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:11:24.870 11:38:56 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:11:24.870 11:38:56 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:11:24.870 11:38:56 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:11:24.870 11:38:56 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:11:24.870 11:38:56 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:11:24.870 11:38:56 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:11:24.870 11:38:56 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:11:24.870 11:38:56 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:11:24.870 11:38:56 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:11:24.870 11:38:56 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:11:24.870 11:38:56 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:11:24.870 11:38:56 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:11:24.870 11:38:56 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:24.870 11:38:56 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:11:24.870 11:38:56 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:11:24.870 11:38:56 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:11:24.870 11:38:56 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:11:24.870 11:38:56 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:11:24.870 11:38:56 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:11:24.870 11:38:56 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:11:24.870 11:38:56 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:11:24.870 11:38:56 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:11:24.870 11:38:56 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:11:24.870 11:38:56 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:24.870 11:38:56 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:11:24.870 11:38:56 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:11:24.870 11:38:56 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:24.870 11:38:56 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:24.870 11:38:56 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:24.870 11:38:56 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:24.870 11:38:56 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:24.870 11:38:56 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:24.870 11:38:56 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:24.870 11:38:56 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:24.870 11:38:56 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:24.870 11:38:56 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:24.870 11:38:56 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:24.870 11:38:56 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:24.870 11:38:56 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:24.871 11:38:56 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:24.871 11:38:56 -- common/applications.sh@22 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:24.871 11:38:56 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:24.871 #define SPDK_CONFIG_H 00:11:24.871 #define SPDK_CONFIG_APPS 1 00:11:24.871 #define SPDK_CONFIG_ARCH native 00:11:24.871 #undef SPDK_CONFIG_ASAN 00:11:24.871 #define SPDK_CONFIG_AVAHI 1 00:11:24.871 #undef SPDK_CONFIG_CET 00:11:24.871 #define SPDK_CONFIG_COVERAGE 1 00:11:24.871 #define SPDK_CONFIG_CROSS_PREFIX 00:11:24.871 #undef SPDK_CONFIG_CRYPTO 00:11:24.871 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:24.871 #undef SPDK_CONFIG_CUSTOMOCF 00:11:24.871 #undef SPDK_CONFIG_DAOS 00:11:24.871 #define SPDK_CONFIG_DAOS_DIR 00:11:24.871 #define SPDK_CONFIG_DEBUG 1 00:11:24.871 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:24.871 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:24.871 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:24.871 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:24.871 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:24.871 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:24.871 #define SPDK_CONFIG_EXAMPLES 1 00:11:24.871 #undef SPDK_CONFIG_FC 00:11:24.871 #define SPDK_CONFIG_FC_PATH 00:11:24.871 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:24.871 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:24.871 #undef SPDK_CONFIG_FUSE 00:11:24.871 #undef SPDK_CONFIG_FUZZER 00:11:24.871 #define SPDK_CONFIG_FUZZER_LIB 00:11:24.871 #define SPDK_CONFIG_GOLANG 1 00:11:24.871 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:24.871 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:24.871 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:24.871 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:24.871 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:24.871 #define SPDK_CONFIG_IDXD 1 00:11:24.871 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:24.871 #undef SPDK_CONFIG_IPSEC_MB 00:11:24.871 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:24.871 #define SPDK_CONFIG_ISAL 1 00:11:24.871 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:24.871 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:24.871 #define SPDK_CONFIG_LIBDIR 00:11:24.871 #undef SPDK_CONFIG_LTO 00:11:24.871 #define SPDK_CONFIG_MAX_LCORES 00:11:24.871 #define SPDK_CONFIG_NVME_CUSE 1 00:11:24.871 #undef SPDK_CONFIG_OCF 00:11:24.871 #define SPDK_CONFIG_OCF_PATH 00:11:24.871 #define SPDK_CONFIG_OPENSSL_PATH 00:11:24.871 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:24.871 #undef SPDK_CONFIG_PGO_USE 00:11:24.871 #define SPDK_CONFIG_PREFIX /usr/local 00:11:24.871 #undef SPDK_CONFIG_RAID5F 00:11:24.871 #undef SPDK_CONFIG_RBD 00:11:24.871 #define SPDK_CONFIG_RDMA 1 00:11:24.871 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:24.871 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:24.871 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:24.871 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:24.871 #define SPDK_CONFIG_SHARED 1 00:11:24.871 #undef SPDK_CONFIG_SMA 00:11:24.871 #define SPDK_CONFIG_TESTS 1 00:11:24.871 #undef SPDK_CONFIG_TSAN 00:11:24.871 #define SPDK_CONFIG_UBLK 1 00:11:24.871 #define SPDK_CONFIG_UBSAN 1 00:11:24.871 #undef SPDK_CONFIG_UNIT_TESTS 00:11:24.871 #undef SPDK_CONFIG_URING 00:11:24.871 #define SPDK_CONFIG_URING_PATH 00:11:24.871 #undef SPDK_CONFIG_URING_ZNS 00:11:24.871 #define SPDK_CONFIG_USDT 1 00:11:24.871 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:24.871 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:24.871 #define SPDK_CONFIG_VFIO_USER 1 00:11:24.871 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:24.871 #define SPDK_CONFIG_VHOST 1 00:11:24.871 #define SPDK_CONFIG_VIRTIO 1 00:11:24.871 #undef SPDK_CONFIG_VTUNE 00:11:24.871 #define SPDK_CONFIG_VTUNE_DIR 
00:11:24.871 #define SPDK_CONFIG_WERROR 1 00:11:24.871 #define SPDK_CONFIG_WPDK_DIR 00:11:24.871 #undef SPDK_CONFIG_XNVME 00:11:24.871 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:24.871 11:38:56 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:24.871 11:38:56 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.871 11:38:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.871 11:38:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.871 11:38:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.871 11:38:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.871 11:38:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.871 11:38:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.871 11:38:56 -- paths/export.sh@5 -- # export PATH 00:11:24.871 11:38:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.871 11:38:56 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:24.871 11:38:56 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:24.871 11:38:56 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:24.871 11:38:56 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:24.871 11:38:56 -- pm/common@7 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:24.871 11:38:56 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:24.871 11:38:56 -- pm/common@16 -- # TEST_TAG=N/A 00:11:24.871 11:38:56 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:24.871 11:38:56 -- common/autotest_common.sh@52 -- # : 1 00:11:24.871 11:38:56 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:11:24.871 11:38:56 -- common/autotest_common.sh@56 -- # : 0 00:11:24.871 11:38:56 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:24.871 11:38:56 -- common/autotest_common.sh@58 -- # : 0 00:11:24.871 11:38:56 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:11:24.871 11:38:56 -- common/autotest_common.sh@60 -- # : 1 00:11:24.871 11:38:56 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:24.872 11:38:56 -- common/autotest_common.sh@62 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:11:24.872 11:38:56 -- common/autotest_common.sh@64 -- # : 00:11:24.872 11:38:56 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:11:24.872 11:38:56 -- common/autotest_common.sh@66 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:11:24.872 11:38:56 -- common/autotest_common.sh@68 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:11:24.872 11:38:56 -- common/autotest_common.sh@70 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:11:24.872 11:38:56 -- common/autotest_common.sh@72 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:24.872 11:38:56 -- common/autotest_common.sh@74 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:11:24.872 11:38:56 -- common/autotest_common.sh@76 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:11:24.872 11:38:56 -- common/autotest_common.sh@78 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:11:24.872 11:38:56 -- common/autotest_common.sh@80 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:11:24.872 11:38:56 -- common/autotest_common.sh@82 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:11:24.872 11:38:56 -- common/autotest_common.sh@84 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:11:24.872 11:38:56 -- common/autotest_common.sh@86 -- # : 1 00:11:24.872 11:38:56 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:11:24.872 11:38:56 -- common/autotest_common.sh@88 -- # : 1 00:11:24.872 11:38:56 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:11:24.872 11:38:56 -- common/autotest_common.sh@90 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:24.872 11:38:56 -- common/autotest_common.sh@92 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:11:24.872 11:38:56 -- common/autotest_common.sh@94 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:11:24.872 11:38:56 -- common/autotest_common.sh@96 -- # : tcp 00:11:24.872 11:38:56 -- common/autotest_common.sh@97 -- # export 
SPDK_TEST_NVMF_TRANSPORT 00:11:24.872 11:38:56 -- common/autotest_common.sh@98 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:11:24.872 11:38:56 -- common/autotest_common.sh@100 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:11:24.872 11:38:56 -- common/autotest_common.sh@102 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:11:24.872 11:38:56 -- common/autotest_common.sh@104 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:11:24.872 11:38:56 -- common/autotest_common.sh@106 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:11:24.872 11:38:56 -- common/autotest_common.sh@108 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:11:24.872 11:38:56 -- common/autotest_common.sh@110 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:11:24.872 11:38:56 -- common/autotest_common.sh@112 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:24.872 11:38:56 -- common/autotest_common.sh@114 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:11:24.872 11:38:56 -- common/autotest_common.sh@116 -- # : 1 00:11:24.872 11:38:56 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:11:24.872 11:38:56 -- common/autotest_common.sh@118 -- # : 00:11:24.872 11:38:56 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:24.872 11:38:56 -- common/autotest_common.sh@120 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:11:24.872 11:38:56 -- common/autotest_common.sh@122 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:11:24.872 11:38:56 -- common/autotest_common.sh@124 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:11:24.872 11:38:56 -- common/autotest_common.sh@126 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:11:24.872 11:38:56 -- common/autotest_common.sh@128 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:11:24.872 11:38:56 -- common/autotest_common.sh@130 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:11:24.872 11:38:56 -- common/autotest_common.sh@132 -- # : 00:11:24.872 11:38:56 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:11:24.872 11:38:56 -- common/autotest_common.sh@134 -- # : true 00:11:24.872 11:38:56 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:11:24.872 11:38:56 -- common/autotest_common.sh@136 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:11:24.872 11:38:56 -- common/autotest_common.sh@138 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:11:24.872 11:38:56 -- common/autotest_common.sh@140 -- # : 1 00:11:24.872 11:38:56 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:11:24.872 11:38:56 -- common/autotest_common.sh@142 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:11:24.872 11:38:56 -- common/autotest_common.sh@144 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@145 -- # 
export SPDK_TEST_SCHEDULER 00:11:24.872 11:38:56 -- common/autotest_common.sh@146 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:11:24.872 11:38:56 -- common/autotest_common.sh@148 -- # : 00:11:24.872 11:38:56 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:11:24.872 11:38:56 -- common/autotest_common.sh@150 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:11:24.872 11:38:56 -- common/autotest_common.sh@152 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:11:24.872 11:38:56 -- common/autotest_common.sh@154 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:11:24.872 11:38:56 -- common/autotest_common.sh@156 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:11:24.872 11:38:56 -- common/autotest_common.sh@158 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:11:24.872 11:38:56 -- common/autotest_common.sh@160 -- # : 0 00:11:24.872 11:38:56 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:11:24.872 11:38:56 -- common/autotest_common.sh@163 -- # : 00:11:24.872 11:38:56 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:11:24.872 11:38:56 -- common/autotest_common.sh@165 -- # : 1 00:11:24.872 11:38:56 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:11:24.872 11:38:56 -- common/autotest_common.sh@167 -- # : 1 00:11:24.872 11:38:56 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:24.872 11:38:56 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:24.872 11:38:56 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.872 11:38:56 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.872 11:38:56 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:24.873 11:38:56 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:24.873 11:38:56 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:24.873 11:38:56 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:11:24.873 11:38:56 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.873 11:38:56 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.873 11:38:56 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.873 11:38:56 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.873 11:38:56 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:24.873 11:38:56 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:11:24.873 11:38:56 -- common/autotest_common.sh@196 -- # cat 00:11:24.873 11:38:56 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:11:24.873 11:38:56 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.873 11:38:56 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.873 11:38:56 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.873 11:38:56 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.873 11:38:56 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:11:24.873 11:38:56 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:11:24.873 11:38:56 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:24.873 11:38:56 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:24.873 11:38:56 -- 
common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:24.873 11:38:56 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:24.873 11:38:56 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.873 11:38:56 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.873 11:38:56 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.873 11:38:56 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.873 11:38:56 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:24.873 11:38:56 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:24.873 11:38:56 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.873 11:38:56 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.873 11:38:56 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:11:24.873 11:38:56 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:11:24.873 11:38:56 -- common/autotest_common.sh@249 -- # _LCOV= 00:11:24.873 11:38:56 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:24.873 11:38:56 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:11:24.873 11:38:56 -- common/autotest_common.sh@255 -- # lcov_opt= 00:11:24.873 11:38:56 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:11:24.873 11:38:56 -- common/autotest_common.sh@259 -- # export valgrind= 00:11:24.873 11:38:56 -- common/autotest_common.sh@259 -- # valgrind= 00:11:24.873 11:38:56 -- common/autotest_common.sh@265 -- # uname -s 00:11:24.873 11:38:56 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:11:24.873 11:38:56 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:11:24.873 11:38:56 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:11:24.873 11:38:56 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:11:24.873 11:38:56 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@275 -- # MAKE=make 00:11:24.873 11:38:56 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:11:24.873 11:38:56 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:11:24.873 11:38:56 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:11:24.873 11:38:56 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:24.873 11:38:56 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:11:24.873 11:38:56 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:11:24.873 11:38:56 -- common/autotest_common.sh@301 -- # for i in "$@" 00:11:24.873 11:38:56 -- common/autotest_common.sh@302 -- # case "$i" in 00:11:24.873 11:38:56 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:11:24.873 11:38:56 -- common/autotest_common.sh@319 -- # [[ -z 60579 ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@319 -- # kill -0 60579 00:11:24.873 11:38:56 -- 
common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:11:24.873 11:38:56 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:11:24.873 11:38:56 -- common/autotest_common.sh@332 -- # local mount target_dir 00:11:24.873 11:38:56 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:11:24.873 11:38:56 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:11:24.873 11:38:56 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:11:24.873 11:38:56 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:11:24.873 11:38:56 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.YRS8mL 00:11:24.873 11:38:56 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:24.873 11:38:56 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:11:24.873 11:38:56 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.YRS8mL/tests/target /tmp/spdk.YRS8mL 00:11:24.873 11:38:56 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:11:24.873 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.873 11:38:56 -- common/autotest_common.sh@328 -- # df -T 00:11:24.873 11:38:56 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:11:24.873 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:11:24.873 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:11:24.873 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=14015692800 00:11:24.873 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:11:24.873 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=5552062464 00:11:24.873 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.873 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:11:24.873 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:11:24.873 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:11:24.873 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:11:24.873 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:11:24.873 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.873 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:11:24.873 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:11:24.873 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265171968 00:11:24.873 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:11:24.873 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:11:24.873 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:11:24.874 11:38:56 -- common/autotest_common.sh@364 -- # 
uses["$mount"]=12816384 00:11:24.874 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=14015692800 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:11:24.874 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=5552062464 00:11:24.874 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266294272 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:11:24.874 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=135168 00:11:24.874 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:11:24.874 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:11:24.874 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:11:24.874 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:11:24.874 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253273600 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253285888 00:11:24.874 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:11:24.874 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:11:24.874 11:38:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=93478998016 00:11:24.874 11:38:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:11:24.874 11:38:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=6223781888 00:11:24.874 11:38:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:11:24.874 11:38:56 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:11:24.874 * Looking for test storage... 
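[Annotation, not from the captured output] The df -T dump above feeds autotest_common.sh's set_test_storage, whose candidate walk continues in the trace below: it finds the mount point holding each candidate directory, compares the free space there against the requested 2214592512 bytes (~2 GiB), and exports SPDK_TEST_STORAGE to the first directory that fits. A simplified bash sketch of that selection, using only the paths and sizes visible in the trace; the real helper also special-cases tmpfs/ramfs and a mktemp fallback, which this sketch omits:

    # Hedged sketch of the selection traced here, not the verbatim autotest helper.
    requested_size=2214592512        # ~2 GiB, as printed in the trace
    candidates=(
      /home/vagrant/spdk_repo/spdk/test/nvmf/target   # $testdir from the trace
      /tmp                                            # stand-in for the mktemp fallback
    )
    declare -A avail_bytes
    while read -r _src _fs _size _used avail mnt; do
      avail_bytes["$mnt"]=$avail                      # free bytes per mount point
    done < <(df -B1 --output=source,fstype,size,used,avail,target | tail -n +2)
    for dir in "${candidates[@]}"; do
      mnt=$(df --output=target "$dir" 2>/dev/null | tail -n 1)   # mount holding $dir
      if (( ${avail_bytes[$mnt]:-0} >= requested_size )); then
        export SPDK_TEST_STORAGE=$dir
        printf '* Found test storage at %s\n' "$dir"
        break
      fi
    done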
00:11:24.874 11:38:56 -- common/autotest_common.sh@369 -- # local target_space new_size 00:11:24.874 11:38:56 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:11:24.874 11:38:56 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.874 11:38:56 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:24.874 11:38:56 -- common/autotest_common.sh@373 -- # mount=/home 00:11:24.874 11:38:56 -- common/autotest_common.sh@375 -- # target_space=14015692800 00:11:24.874 11:38:56 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:11:24.874 11:38:56 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:11:24.874 11:38:56 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:11:24.874 11:38:56 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:11:24.874 11:38:56 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:11:24.874 11:38:56 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.874 11:38:56 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.874 11:38:56 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.874 11:38:56 -- common/autotest_common.sh@390 -- # return 0 00:11:24.874 11:38:56 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:11:24.874 11:38:56 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:11:24.874 11:38:56 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:24.874 11:38:56 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:24.874 11:38:56 -- common/autotest_common.sh@1682 -- # true 00:11:24.874 11:38:56 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:11:24.874 11:38:56 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:11:24.874 11:38:56 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:11:24.874 11:38:56 -- common/autotest_common.sh@27 -- # exec 00:11:24.874 11:38:56 -- common/autotest_common.sh@29 -- # exec 00:11:24.874 11:38:56 -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:24.874 11:38:56 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:24.874 11:38:56 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:24.874 11:38:56 -- common/autotest_common.sh@18 -- # set -x 00:11:24.874 11:38:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:24.874 11:38:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:24.874 11:38:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:24.874 11:38:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:24.874 11:38:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:24.874 11:38:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:24.874 11:38:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:24.874 11:38:56 -- scripts/common.sh@335 -- # IFS=.-: 00:11:24.874 11:38:56 -- scripts/common.sh@335 -- # read -ra ver1 00:11:24.874 11:38:56 -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.874 11:38:56 -- scripts/common.sh@336 -- # read -ra ver2 00:11:24.874 11:38:56 -- scripts/common.sh@337 -- # local 'op=<' 00:11:24.874 11:38:56 -- scripts/common.sh@339 -- # ver1_l=2 00:11:24.874 11:38:56 -- scripts/common.sh@340 -- # ver2_l=1 00:11:24.874 11:38:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:24.874 11:38:56 -- scripts/common.sh@343 -- # case "$op" in 00:11:24.874 11:38:56 -- scripts/common.sh@344 -- # : 1 00:11:24.874 11:38:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:24.874 11:38:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.874 11:38:56 -- scripts/common.sh@364 -- # decimal 1 00:11:24.874 11:38:56 -- scripts/common.sh@352 -- # local d=1 00:11:24.875 11:38:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.875 11:38:56 -- scripts/common.sh@354 -- # echo 1 00:11:24.875 11:38:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:24.875 11:38:56 -- scripts/common.sh@365 -- # decimal 2 00:11:24.875 11:38:56 -- scripts/common.sh@352 -- # local d=2 00:11:24.875 11:38:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.875 11:38:56 -- scripts/common.sh@354 -- # echo 2 00:11:24.875 11:38:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:24.875 11:38:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:24.875 11:38:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:24.875 11:38:56 -- scripts/common.sh@367 -- # return 0 00:11:24.875 11:38:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.875 11:38:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:24.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.875 --rc genhtml_branch_coverage=1 00:11:24.875 --rc genhtml_function_coverage=1 00:11:24.875 --rc genhtml_legend=1 00:11:24.875 --rc geninfo_all_blocks=1 00:11:24.875 --rc geninfo_unexecuted_blocks=1 00:11:24.875 00:11:24.875 ' 00:11:24.875 11:38:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:24.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.875 --rc genhtml_branch_coverage=1 00:11:24.875 --rc genhtml_function_coverage=1 00:11:24.875 --rc genhtml_legend=1 00:11:24.875 --rc geninfo_all_blocks=1 00:11:24.875 --rc geninfo_unexecuted_blocks=1 00:11:24.875 00:11:24.875 ' 00:11:24.875 11:38:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:24.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.875 --rc genhtml_branch_coverage=1 00:11:24.875 --rc genhtml_function_coverage=1 00:11:24.875 --rc genhtml_legend=1 00:11:24.875 --rc geninfo_all_blocks=1 00:11:24.875 --rc 
geninfo_unexecuted_blocks=1 00:11:24.875 00:11:24.875 ' 00:11:24.875 11:38:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:24.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.875 --rc genhtml_branch_coverage=1 00:11:24.875 --rc genhtml_function_coverage=1 00:11:24.875 --rc genhtml_legend=1 00:11:24.875 --rc geninfo_all_blocks=1 00:11:24.875 --rc geninfo_unexecuted_blocks=1 00:11:24.875 00:11:24.875 ' 00:11:24.875 11:38:56 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:24.875 11:38:56 -- nvmf/common.sh@7 -- # uname -s 00:11:24.875 11:38:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.875 11:38:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.875 11:38:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.875 11:38:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.875 11:38:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.875 11:38:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.875 11:38:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.875 11:38:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.875 11:38:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.875 11:38:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.875 11:38:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:11:24.875 11:38:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:11:24.875 11:38:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.875 11:38:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.875 11:38:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:24.875 11:38:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.875 11:38:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.875 11:38:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.875 11:38:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.875 11:38:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.875 11:38:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.875 11:38:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.875 11:38:56 -- paths/export.sh@5 -- # export PATH 00:11:24.875 11:38:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.875 11:38:56 -- nvmf/common.sh@46 -- # : 0 00:11:24.875 11:38:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:24.875 11:38:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:24.875 11:38:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:24.875 11:38:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.875 11:38:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.875 11:38:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:24.875 11:38:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:24.875 11:38:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:24.875 11:38:56 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:24.875 11:38:56 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:24.875 11:38:56 -- target/filesystem.sh@15 -- # nvmftestinit 00:11:24.876 11:38:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:24.876 11:38:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.876 11:38:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:24.876 11:38:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:24.876 11:38:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:24.876 11:38:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.876 11:38:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.876 11:38:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.876 11:38:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:24.876 11:38:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:24.876 11:38:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:24.876 11:38:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:24.876 11:38:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:24.876 11:38:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:24.876 11:38:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.876 11:38:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.876 11:38:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:24.876 11:38:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:24.876 11:38:56 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:24.876 11:38:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:24.876 11:38:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:24.876 11:38:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.876 11:38:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:24.876 11:38:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:24.876 11:38:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:24.876 11:38:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:24.876 11:38:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:24.876 11:38:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:24.876 Cannot find device "nvmf_tgt_br" 00:11:24.876 11:38:56 -- nvmf/common.sh@154 -- # true 00:11:24.876 11:38:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.876 Cannot find device "nvmf_tgt_br2" 00:11:24.876 11:38:56 -- nvmf/common.sh@155 -- # true 00:11:24.876 11:38:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:24.876 11:38:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:24.876 Cannot find device "nvmf_tgt_br" 00:11:24.876 11:38:56 -- nvmf/common.sh@157 -- # true 00:11:24.876 11:38:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:24.876 Cannot find device "nvmf_tgt_br2" 00:11:24.876 11:38:56 -- nvmf/common.sh@158 -- # true 00:11:24.876 11:38:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:24.876 11:38:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:24.876 11:38:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.876 11:38:56 -- nvmf/common.sh@161 -- # true 00:11:24.876 11:38:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.876 11:38:56 -- nvmf/common.sh@162 -- # true 00:11:24.876 11:38:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:24.876 11:38:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:24.876 11:38:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:24.876 11:38:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:24.876 11:38:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:24.876 11:38:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:24.876 11:38:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:24.876 11:38:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:24.876 11:38:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:24.876 11:38:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:24.876 11:38:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:24.876 11:38:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:24.876 11:38:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:24.876 11:38:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:24.876 11:38:56 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:24.876 11:38:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:24.876 11:38:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:24.876 11:38:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:24.876 11:38:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:24.876 11:38:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:24.876 11:38:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:24.876 11:38:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:24.876 11:38:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:24.876 11:38:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:24.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:24.876 00:11:24.876 --- 10.0.0.2 ping statistics --- 00:11:24.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.876 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:24.876 11:38:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:24.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:24.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:24.876 00:11:24.876 --- 10.0.0.3 ping statistics --- 00:11:24.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.876 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:24.876 11:38:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:24.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:24.876 00:11:24.876 --- 10.0.0.1 ping statistics --- 00:11:24.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.876 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:24.876 11:38:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.876 11:38:56 -- nvmf/common.sh@421 -- # return 0 00:11:24.876 11:38:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:24.876 11:38:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.876 11:38:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:24.876 11:38:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:24.876 11:38:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.876 11:38:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:24.876 11:38:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:24.876 11:38:56 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:24.876 11:38:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:24.876 11:38:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.876 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:24.876 ************************************ 00:11:24.876 START TEST nvmf_filesystem_no_in_capsule 00:11:24.876 ************************************ 00:11:24.876 11:38:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:11:24.876 11:38:56 -- target/filesystem.sh@47 -- # in_capsule=0 00:11:24.876 11:38:56 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:24.876 11:38:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:24.876 11:38:56 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:11:24.876 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:24.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.876 11:38:56 -- nvmf/common.sh@469 -- # nvmfpid=60758 00:11:24.876 11:38:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.876 11:38:56 -- nvmf/common.sh@470 -- # waitforlisten 60758 00:11:24.876 11:38:56 -- common/autotest_common.sh@829 -- # '[' -z 60758 ']' 00:11:24.876 11:38:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.876 11:38:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.876 11:38:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.876 11:38:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.876 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:24.876 [2024-11-20 11:38:56.769592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:24.876 [2024-11-20 11:38:56.769820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.876 [2024-11-20 11:38:56.913748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.876 [2024-11-20 11:38:57.034274] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:24.876 [2024-11-20 11:38:57.034495] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.877 [2024-11-20 11:38:57.034533] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.877 [2024-11-20 11:38:57.034582] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
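Condensed, the nvmf_veth_init trace above amounts to the standalone sketch below. The interface names, the 10.0.0.0/24 addresses, TCP port 4420 and the nvmf_tgt invocation are taken from this log; the ordering and lack of error handling are an illustrative assumption, not the exact common.sh implementation.

# Rebuild the test topology: one veth pair per role, target-side ends in a private
# namespace, host-side ends enslaved to a bridge so the initiator reaches both target IPs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability
# The target application itself then runs inside the namespace, as in the trace that follows:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &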
00:11:24.877 [2024-11-20 11:38:57.034697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.877 [2024-11-20 11:38:57.035057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.877 [2024-11-20 11:38:57.035111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.877 [2024-11-20 11:38:57.035116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.877 11:38:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.877 11:38:57 -- common/autotest_common.sh@862 -- # return 0 00:11:24.877 11:38:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:24.877 11:38:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:24.877 11:38:57 -- common/autotest_common.sh@10 -- # set +x 00:11:24.877 11:38:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.877 11:38:57 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:24.877 11:38:57 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:24.877 11:38:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.877 11:38:57 -- common/autotest_common.sh@10 -- # set +x 00:11:24.877 [2024-11-20 11:38:57.852155] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.877 11:38:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.877 11:38:57 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:24.877 11:38:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.877 11:38:57 -- common/autotest_common.sh@10 -- # set +x 00:11:25.135 Malloc1 00:11:25.136 11:38:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.136 11:38:58 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.136 11:38:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.136 11:38:58 -- common/autotest_common.sh@10 -- # set +x 00:11:25.136 11:38:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.136 11:38:58 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:25.136 11:38:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.136 11:38:58 -- common/autotest_common.sh@10 -- # set +x 00:11:25.136 11:38:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.136 11:38:58 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.136 11:38:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.136 11:38:58 -- common/autotest_common.sh@10 -- # set +x 00:11:25.136 [2024-11-20 11:38:58.024479] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.136 11:38:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.136 11:38:58 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:25.136 11:38:58 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:11:25.136 11:38:58 -- common/autotest_common.sh@1368 -- # local bdev_info 00:11:25.136 11:38:58 -- common/autotest_common.sh@1369 -- # local bs 00:11:25.136 11:38:58 -- common/autotest_common.sh@1370 -- # local nb 00:11:25.136 11:38:58 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:25.136 11:38:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.136 11:38:58 -- common/autotest_common.sh@10 -- # set +x 00:11:25.136 
11:38:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.136 11:38:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:11:25.136 { 00:11:25.136 "aliases": [ 00:11:25.136 "abf0ab88-5a1e-4bb5-9380-5eb1bddd7dbc" 00:11:25.136 ], 00:11:25.136 "assigned_rate_limits": { 00:11:25.136 "r_mbytes_per_sec": 0, 00:11:25.136 "rw_ios_per_sec": 0, 00:11:25.136 "rw_mbytes_per_sec": 0, 00:11:25.136 "w_mbytes_per_sec": 0 00:11:25.136 }, 00:11:25.136 "block_size": 512, 00:11:25.136 "claim_type": "exclusive_write", 00:11:25.136 "claimed": true, 00:11:25.136 "driver_specific": {}, 00:11:25.136 "memory_domains": [ 00:11:25.136 { 00:11:25.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.136 "dma_device_type": 2 00:11:25.136 } 00:11:25.136 ], 00:11:25.136 "name": "Malloc1", 00:11:25.136 "num_blocks": 1048576, 00:11:25.136 "product_name": "Malloc disk", 00:11:25.136 "supported_io_types": { 00:11:25.136 "abort": true, 00:11:25.136 "compare": false, 00:11:25.136 "compare_and_write": false, 00:11:25.136 "flush": true, 00:11:25.136 "nvme_admin": false, 00:11:25.136 "nvme_io": false, 00:11:25.136 "read": true, 00:11:25.136 "reset": true, 00:11:25.136 "unmap": true, 00:11:25.136 "write": true, 00:11:25.136 "write_zeroes": true 00:11:25.136 }, 00:11:25.136 "uuid": "abf0ab88-5a1e-4bb5-9380-5eb1bddd7dbc", 00:11:25.136 "zoned": false 00:11:25.136 } 00:11:25.136 ]' 00:11:25.136 11:38:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:11:25.136 11:38:58 -- common/autotest_common.sh@1372 -- # bs=512 00:11:25.136 11:38:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:11:25.136 11:38:58 -- common/autotest_common.sh@1373 -- # nb=1048576 00:11:25.136 11:38:58 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:11:25.136 11:38:58 -- common/autotest_common.sh@1377 -- # echo 512 00:11:25.136 11:38:58 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:25.136 11:38:58 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.395 11:38:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.395 11:38:58 -- common/autotest_common.sh@1187 -- # local i=0 00:11:25.395 11:38:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.395 11:38:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:25.395 11:38:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:27.298 11:39:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:27.298 11:39:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:27.299 11:39:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.299 11:39:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:27.299 11:39:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.299 11:39:00 -- common/autotest_common.sh@1197 -- # return 0 00:11:27.299 11:39:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:27.299 11:39:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:27.299 11:39:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:27.299 11:39:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:27.299 11:39:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:27.299 11:39:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:27.299 11:39:00 -- 
setup/common.sh@80 -- # echo 536870912 00:11:27.299 11:39:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:27.299 11:39:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:27.299 11:39:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:27.299 11:39:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:27.557 11:39:00 -- target/filesystem.sh@69 -- # partprobe 00:11:27.557 11:39:00 -- target/filesystem.sh@70 -- # sleep 1 00:11:28.496 11:39:01 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:28.496 11:39:01 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:28.496 11:39:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:28.496 11:39:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.496 11:39:01 -- common/autotest_common.sh@10 -- # set +x 00:11:28.496 ************************************ 00:11:28.496 START TEST filesystem_ext4 00:11:28.496 ************************************ 00:11:28.496 11:39:01 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:28.496 11:39:01 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:28.496 11:39:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.496 11:39:01 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:28.496 11:39:01 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:11:28.496 11:39:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:28.496 11:39:01 -- common/autotest_common.sh@914 -- # local i=0 00:11:28.496 11:39:01 -- common/autotest_common.sh@915 -- # local force 00:11:28.496 11:39:01 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:11:28.496 11:39:01 -- common/autotest_common.sh@918 -- # force=-F 00:11:28.496 11:39:01 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:28.496 mke2fs 1.47.0 (5-Feb-2023) 00:11:28.756 Discarding device blocks: 0/522240 done 00:11:28.756 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:28.756 Filesystem UUID: 741b11a6-744c-4604-93bb-d2375b291d19 00:11:28.756 Superblock backups stored on blocks: 00:11:28.756 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:28.756 00:11:28.756 Allocating group tables: 0/64 done 00:11:28.756 Writing inode tables: 0/64 done 00:11:28.756 Creating journal (8192 blocks): done 00:11:28.756 Writing superblocks and filesystem accounting information: 0/64 done 00:11:28.756 00:11:28.756 11:39:01 -- common/autotest_common.sh@931 -- # return 0 00:11:28.756 11:39:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.039 11:39:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.039 11:39:07 -- target/filesystem.sh@25 -- # sync 00:11:34.039 11:39:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.039 11:39:07 -- target/filesystem.sh@27 -- # sync 00:11:34.039 11:39:07 -- target/filesystem.sh@29 -- # i=0 00:11:34.039 11:39:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.039 11:39:07 -- target/filesystem.sh@37 -- # kill -0 60758 00:11:34.039 11:39:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.039 11:39:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.298 11:39:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.298 11:39:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.298 ************************************ 00:11:34.298 END TEST filesystem_ext4 00:11:34.298 
************************************ 00:11:34.298 00:11:34.298 real 0m5.633s 00:11:34.298 user 0m0.025s 00:11:34.298 sys 0m0.079s 00:11:34.298 11:39:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:34.298 11:39:07 -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 11:39:07 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:34.299 11:39:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:34.299 11:39:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.299 11:39:07 -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 ************************************ 00:11:34.299 START TEST filesystem_btrfs 00:11:34.299 ************************************ 00:11:34.299 11:39:07 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:34.299 11:39:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:34.299 11:39:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.299 11:39:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:34.299 11:39:07 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:11:34.299 11:39:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:34.299 11:39:07 -- common/autotest_common.sh@914 -- # local i=0 00:11:34.299 11:39:07 -- common/autotest_common.sh@915 -- # local force 00:11:34.299 11:39:07 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:11:34.299 11:39:07 -- common/autotest_common.sh@920 -- # force=-f 00:11:34.299 11:39:07 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:34.299 btrfs-progs v6.8.1 00:11:34.299 See https://btrfs.readthedocs.io for more information. 00:11:34.299 00:11:34.299 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:34.299 NOTE: several default settings have changed in version 5.15, please make sure 00:11:34.299 this does not affect your deployments: 00:11:34.299 - DUP for metadata (-m dup) 00:11:34.299 - enabled no-holes (-O no-holes) 00:11:34.299 - enabled free-space-tree (-R free-space-tree) 00:11:34.299 00:11:34.299 Label: (null) 00:11:34.299 UUID: c1d1124d-5e9b-44d5-bb08-9d0563a8d87e 00:11:34.299 Node size: 16384 00:11:34.299 Sector size: 4096 (CPU page size: 4096) 00:11:34.299 Filesystem size: 510.00MiB 00:11:34.299 Block group profiles: 00:11:34.299 Data: single 8.00MiB 00:11:34.299 Metadata: DUP 32.00MiB 00:11:34.299 System: DUP 8.00MiB 00:11:34.299 SSD detected: yes 00:11:34.299 Zoned device: no 00:11:34.299 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:34.299 Checksum: crc32c 00:11:34.299 Number of devices: 1 00:11:34.299 Devices: 00:11:34.299 ID SIZE PATH 00:11:34.299 1 510.00MiB /dev/nvme0n1p1 00:11:34.299 00:11:34.299 11:39:07 -- common/autotest_common.sh@931 -- # return 0 00:11:34.299 11:39:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.299 11:39:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.299 11:39:07 -- target/filesystem.sh@25 -- # sync 00:11:34.299 11:39:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.299 11:39:07 -- target/filesystem.sh@27 -- # sync 00:11:34.299 11:39:07 -- target/filesystem.sh@29 -- # i=0 00:11:34.299 11:39:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.299 11:39:07 -- target/filesystem.sh@37 -- # kill -0 60758 00:11:34.299 11:39:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.299 11:39:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.299 11:39:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.559 11:39:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.559 ************************************ 00:11:34.559 END TEST filesystem_btrfs 00:11:34.559 ************************************ 00:11:34.559 00:11:34.559 real 0m0.207s 00:11:34.559 user 0m0.018s 00:11:34.559 sys 0m0.057s 00:11:34.559 11:39:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:34.559 11:39:07 -- common/autotest_common.sh@10 -- # set +x 00:11:34.559 11:39:07 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:34.559 11:39:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:34.559 11:39:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.559 11:39:07 -- common/autotest_common.sh@10 -- # set +x 00:11:34.559 ************************************ 00:11:34.559 START TEST filesystem_xfs 00:11:34.559 ************************************ 00:11:34.559 11:39:07 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:11:34.559 11:39:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:34.559 11:39:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.559 11:39:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:34.559 11:39:07 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:11:34.559 11:39:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:34.559 11:39:07 -- common/autotest_common.sh@914 -- # local i=0 00:11:34.559 11:39:07 -- common/autotest_common.sh@915 -- # local force 00:11:34.559 11:39:07 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:11:34.559 11:39:07 -- common/autotest_common.sh@920 -- # force=-f 00:11:34.559 11:39:07 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:11:34.559 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:34.559 = sectsz=512 attr=2, projid32bit=1 00:11:34.559 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:34.559 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:34.559 data = bsize=4096 blocks=130560, imaxpct=25 00:11:34.559 = sunit=0 swidth=0 blks 00:11:34.559 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:34.559 log =internal log bsize=4096 blocks=16384, version=2 00:11:34.559 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:34.559 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:35.126 Discarding blocks...Done. 00:11:35.126 11:39:08 -- common/autotest_common.sh@931 -- # return 0 00:11:35.126 11:39:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.661 11:39:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.661 11:39:10 -- target/filesystem.sh@25 -- # sync 00:11:37.661 11:39:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.661 11:39:10 -- target/filesystem.sh@27 -- # sync 00:11:37.661 11:39:10 -- target/filesystem.sh@29 -- # i=0 00:11:37.661 11:39:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.661 11:39:10 -- target/filesystem.sh@37 -- # kill -0 60758 00:11:37.661 11:39:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.661 11:39:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.661 11:39:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.661 11:39:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.661 ************************************ 00:11:37.661 END TEST filesystem_xfs 00:11:37.661 ************************************ 00:11:37.661 00:11:37.661 real 0m3.065s 00:11:37.661 user 0m0.015s 00:11:37.661 sys 0m0.064s 00:11:37.661 11:39:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:37.661 11:39:10 -- common/autotest_common.sh@10 -- # set +x 00:11:37.661 11:39:10 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:37.661 11:39:10 -- target/filesystem.sh@93 -- # sync 00:11:37.661 11:39:10 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.661 11:39:10 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.661 11:39:10 -- common/autotest_common.sh@1208 -- # local i=0 00:11:37.661 11:39:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:37.661 11:39:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.661 11:39:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.661 11:39:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:37.661 11:39:10 -- common/autotest_common.sh@1220 -- # return 0 00:11:37.661 11:39:10 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.661 11:39:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.661 11:39:10 -- common/autotest_common.sh@10 -- # set +x 00:11:37.661 11:39:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.661 11:39:10 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:37.661 11:39:10 -- target/filesystem.sh@101 -- # killprocess 60758 00:11:37.661 11:39:10 -- common/autotest_common.sh@936 -- # '[' -z 60758 ']' 00:11:37.661 11:39:10 -- common/autotest_common.sh@940 -- # kill -0 60758 00:11:37.661 11:39:10 -- common/autotest_common.sh@941 -- # uname 00:11:37.661 11:39:10 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:37.661 11:39:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60758 00:11:37.661 killing process with pid 60758 00:11:37.661 11:39:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:37.661 11:39:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:37.661 11:39:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60758' 00:11:37.661 11:39:10 -- common/autotest_common.sh@955 -- # kill 60758 00:11:37.661 11:39:10 -- common/autotest_common.sh@960 -- # wait 60758 00:11:38.231 ************************************ 00:11:38.231 END TEST nvmf_filesystem_no_in_capsule 00:11:38.231 ************************************ 00:11:38.231 11:39:10 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:38.231 00:11:38.231 real 0m14.295s 00:11:38.231 user 0m55.039s 00:11:38.231 sys 0m1.630s 00:11:38.231 11:39:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.231 11:39:11 -- common/autotest_common.sh@10 -- # set +x 00:11:38.231 11:39:11 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:38.231 11:39:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:38.231 11:39:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.231 11:39:11 -- common/autotest_common.sh@10 -- # set +x 00:11:38.231 ************************************ 00:11:38.231 START TEST nvmf_filesystem_in_capsule 00:11:38.231 ************************************ 00:11:38.231 11:39:11 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:11:38.231 11:39:11 -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:38.231 11:39:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:38.231 11:39:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:38.231 11:39:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:38.231 11:39:11 -- common/autotest_common.sh@10 -- # set +x 00:11:38.231 11:39:11 -- nvmf/common.sh@469 -- # nvmfpid=61125 00:11:38.231 11:39:11 -- nvmf/common.sh@470 -- # waitforlisten 61125 00:11:38.231 11:39:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.231 11:39:11 -- common/autotest_common.sh@829 -- # '[' -z 61125 ']' 00:11:38.231 11:39:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.231 11:39:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.231 11:39:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.231 11:39:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.231 11:39:11 -- common/autotest_common.sh@10 -- # set +x 00:11:38.231 [2024-11-20 11:39:11.130364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
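In outline, this in-capsule variant repeats the same target-side RPC sequence and host-side filesystem exercise as the zero-capsule run, only with nvmf_create_transport -c 4096 so that up to 4 KiB of data rides inside the command capsule. A minimal sketch of that flow follows, assuming rpc_cmd expands to scripts/rpc.py run inside the target namespace; SPDK_DIR, HOSTNQN and HOSTID are placeholders, not values captured from this run.

# Target side: transport, ram-disk bdev, subsystem, namespace, TCP listener.
RPC="ip netns exec nvmf_tgt_ns_spdk $SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096                # in-capsule data size = 4096
$RPC bdev_malloc_create 512 512 -b Malloc1                          # 512 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect, partition, run one mkfs/mount/touch/rm cycle, disconnect.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%         # controller name may differ on other hosts
partprobe && sleep 1
mkfs.ext4 -F /dev/nvme0n1p1
mkdir -p /mnt/device && mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
umount /mnt/device
nvme disconnect -n nqn.2016-06.io.spdk:cnode1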
00:11:38.231 [2024-11-20 11:39:11.130441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.231 [2024-11-20 11:39:11.269700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.491 [2024-11-20 11:39:11.376475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:38.491 [2024-11-20 11:39:11.376614] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.491 [2024-11-20 11:39:11.376622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.491 [2024-11-20 11:39:11.376628] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.491 [2024-11-20 11:39:11.376747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.491 [2024-11-20 11:39:11.376998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.491 [2024-11-20 11:39:11.377088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.491 [2024-11-20 11:39:11.377091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.061 11:39:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.061 11:39:12 -- common/autotest_common.sh@862 -- # return 0 00:11:39.061 11:39:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:39.061 11:39:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:39.061 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:11:39.320 11:39:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.320 11:39:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:39.320 11:39:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:39.320 11:39:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.320 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:11:39.320 [2024-11-20 11:39:12.120694] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.320 11:39:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.320 11:39:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:39.320 11:39:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.320 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:11:39.320 Malloc1 00:11:39.320 11:39:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.320 11:39:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.320 11:39:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.320 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:11:39.320 11:39:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.320 11:39:12 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.320 11:39:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.320 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:11:39.320 11:39:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.320 11:39:12 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.320 11:39:12 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.320 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:11:39.320 [2024-11-20 11:39:12.296382] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.320 11:39:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.320 11:39:12 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:39.320 11:39:12 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:11:39.320 11:39:12 -- common/autotest_common.sh@1368 -- # local bdev_info 00:11:39.320 11:39:12 -- common/autotest_common.sh@1369 -- # local bs 00:11:39.320 11:39:12 -- common/autotest_common.sh@1370 -- # local nb 00:11:39.320 11:39:12 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:39.320 11:39:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.320 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:11:39.320 11:39:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.320 11:39:12 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:11:39.320 { 00:11:39.320 "aliases": [ 00:11:39.320 "5773d9f9-38eb-4802-bdcb-3d56fac5275f" 00:11:39.320 ], 00:11:39.320 "assigned_rate_limits": { 00:11:39.320 "r_mbytes_per_sec": 0, 00:11:39.320 "rw_ios_per_sec": 0, 00:11:39.320 "rw_mbytes_per_sec": 0, 00:11:39.320 "w_mbytes_per_sec": 0 00:11:39.320 }, 00:11:39.320 "block_size": 512, 00:11:39.320 "claim_type": "exclusive_write", 00:11:39.320 "claimed": true, 00:11:39.320 "driver_specific": {}, 00:11:39.320 "memory_domains": [ 00:11:39.320 { 00:11:39.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.320 "dma_device_type": 2 00:11:39.320 } 00:11:39.320 ], 00:11:39.320 "name": "Malloc1", 00:11:39.320 "num_blocks": 1048576, 00:11:39.321 "product_name": "Malloc disk", 00:11:39.321 "supported_io_types": { 00:11:39.321 "abort": true, 00:11:39.321 "compare": false, 00:11:39.321 "compare_and_write": false, 00:11:39.321 "flush": true, 00:11:39.321 "nvme_admin": false, 00:11:39.321 "nvme_io": false, 00:11:39.321 "read": true, 00:11:39.321 "reset": true, 00:11:39.321 "unmap": true, 00:11:39.321 "write": true, 00:11:39.321 "write_zeroes": true 00:11:39.321 }, 00:11:39.321 "uuid": "5773d9f9-38eb-4802-bdcb-3d56fac5275f", 00:11:39.321 "zoned": false 00:11:39.321 } 00:11:39.321 ]' 00:11:39.321 11:39:12 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:11:39.580 11:39:12 -- common/autotest_common.sh@1372 -- # bs=512 00:11:39.580 11:39:12 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:11:39.580 11:39:12 -- common/autotest_common.sh@1373 -- # nb=1048576 00:11:39.580 11:39:12 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:11:39.580 11:39:12 -- common/autotest_common.sh@1377 -- # echo 512 00:11:39.580 11:39:12 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:39.580 11:39:12 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.580 11:39:12 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.580 11:39:12 -- common/autotest_common.sh@1187 -- # local i=0 00:11:39.580 11:39:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.580 11:39:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:39.580 11:39:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:42.118 11:39:14 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:42.118 11:39:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:42.118 11:39:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.118 11:39:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:42.118 11:39:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.118 11:39:14 -- common/autotest_common.sh@1197 -- # return 0 00:11:42.118 11:39:14 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:42.118 11:39:14 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:42.118 11:39:14 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:42.118 11:39:14 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:42.118 11:39:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:42.119 11:39:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:42.119 11:39:14 -- setup/common.sh@80 -- # echo 536870912 00:11:42.119 11:39:14 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:42.119 11:39:14 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:42.119 11:39:14 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:42.119 11:39:14 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:42.119 11:39:14 -- target/filesystem.sh@69 -- # partprobe 00:11:42.119 11:39:14 -- target/filesystem.sh@70 -- # sleep 1 00:11:43.062 11:39:15 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:43.062 11:39:15 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:43.062 11:39:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:43.062 11:39:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.062 11:39:15 -- common/autotest_common.sh@10 -- # set +x 00:11:43.062 ************************************ 00:11:43.062 START TEST filesystem_in_capsule_ext4 00:11:43.062 ************************************ 00:11:43.062 11:39:15 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:43.062 11:39:15 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:43.062 11:39:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.062 11:39:15 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:43.062 11:39:15 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:11:43.062 11:39:15 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:43.062 11:39:15 -- common/autotest_common.sh@914 -- # local i=0 00:11:43.062 11:39:15 -- common/autotest_common.sh@915 -- # local force 00:11:43.062 11:39:15 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:11:43.062 11:39:15 -- common/autotest_common.sh@918 -- # force=-F 00:11:43.062 11:39:15 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:43.062 mke2fs 1.47.0 (5-Feb-2023) 00:11:43.062 Discarding device blocks: 0/522240 done 00:11:43.062 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:43.062 Filesystem UUID: 1dbc01bd-af15-444e-a238-fed953303801 00:11:43.062 Superblock backups stored on blocks: 00:11:43.062 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:43.062 00:11:43.062 Allocating group tables: 0/64 done 00:11:43.062 Writing inode tables: 0/64 done 00:11:43.062 Creating journal (8192 blocks): done 00:11:43.062 Writing superblocks and filesystem accounting information: 0/64 done 00:11:43.062 00:11:43.062 11:39:15 
-- common/autotest_common.sh@931 -- # return 0 00:11:43.062 11:39:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.354 11:39:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.354 11:39:21 -- target/filesystem.sh@25 -- # sync 00:11:48.354 11:39:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.354 11:39:21 -- target/filesystem.sh@27 -- # sync 00:11:48.354 11:39:21 -- target/filesystem.sh@29 -- # i=0 00:11:48.354 11:39:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.354 11:39:21 -- target/filesystem.sh@37 -- # kill -0 61125 00:11:48.354 11:39:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.354 11:39:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.354 11:39:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.354 11:39:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.354 00:11:48.354 real 0m5.507s 00:11:48.354 user 0m0.020s 00:11:48.354 sys 0m0.066s 00:11:48.354 11:39:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:48.354 11:39:21 -- common/autotest_common.sh@10 -- # set +x 00:11:48.354 ************************************ 00:11:48.354 END TEST filesystem_in_capsule_ext4 00:11:48.354 ************************************ 00:11:48.354 11:39:21 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:48.354 11:39:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:48.354 11:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.354 11:39:21 -- common/autotest_common.sh@10 -- # set +x 00:11:48.354 ************************************ 00:11:48.354 START TEST filesystem_in_capsule_btrfs 00:11:48.354 ************************************ 00:11:48.354 11:39:21 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:48.354 11:39:21 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:48.354 11:39:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.354 11:39:21 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:48.354 11:39:21 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:11:48.354 11:39:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:48.354 11:39:21 -- common/autotest_common.sh@914 -- # local i=0 00:11:48.354 11:39:21 -- common/autotest_common.sh@915 -- # local force 00:11:48.354 11:39:21 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:11:48.354 11:39:21 -- common/autotest_common.sh@920 -- # force=-f 00:11:48.354 11:39:21 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:48.613 btrfs-progs v6.8.1 00:11:48.613 See https://btrfs.readthedocs.io for more information. 00:11:48.613 00:11:48.613 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:48.613 NOTE: several default settings have changed in version 5.15, please make sure 00:11:48.613 this does not affect your deployments: 00:11:48.613 - DUP for metadata (-m dup) 00:11:48.613 - enabled no-holes (-O no-holes) 00:11:48.613 - enabled free-space-tree (-R free-space-tree) 00:11:48.613 00:11:48.613 Label: (null) 00:11:48.613 UUID: 5315b617-c30c-4cff-ac3b-b08b27b10ae3 00:11:48.613 Node size: 16384 00:11:48.613 Sector size: 4096 (CPU page size: 4096) 00:11:48.613 Filesystem size: 510.00MiB 00:11:48.613 Block group profiles: 00:11:48.613 Data: single 8.00MiB 00:11:48.613 Metadata: DUP 32.00MiB 00:11:48.613 System: DUP 8.00MiB 00:11:48.613 SSD detected: yes 00:11:48.613 Zoned device: no 00:11:48.613 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:48.613 Checksum: crc32c 00:11:48.613 Number of devices: 1 00:11:48.613 Devices: 00:11:48.613 ID SIZE PATH 00:11:48.613 1 510.00MiB /dev/nvme0n1p1 00:11:48.613 00:11:48.613 11:39:21 -- common/autotest_common.sh@931 -- # return 0 00:11:48.613 11:39:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.613 11:39:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.613 11:39:21 -- target/filesystem.sh@25 -- # sync 00:11:48.613 11:39:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.613 11:39:21 -- target/filesystem.sh@27 -- # sync 00:11:48.613 11:39:21 -- target/filesystem.sh@29 -- # i=0 00:11:48.613 11:39:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.613 11:39:21 -- target/filesystem.sh@37 -- # kill -0 61125 00:11:48.613 11:39:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.613 11:39:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.613 11:39:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.613 11:39:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.613 00:11:48.613 real 0m0.238s 00:11:48.613 user 0m0.035s 00:11:48.613 sys 0m0.081s 00:11:48.613 11:39:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:48.613 11:39:21 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 ************************************ 00:11:48.613 END TEST filesystem_in_capsule_btrfs 00:11:48.613 ************************************ 00:11:48.613 11:39:21 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:48.613 11:39:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:48.613 11:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.613 11:39:21 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 ************************************ 00:11:48.613 START TEST filesystem_in_capsule_xfs 00:11:48.613 ************************************ 00:11:48.613 11:39:21 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:11:48.613 11:39:21 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:48.613 11:39:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.613 11:39:21 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:48.613 11:39:21 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:11:48.613 11:39:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:48.613 11:39:21 -- common/autotest_common.sh@914 -- # local i=0 00:11:48.613 11:39:21 -- common/autotest_common.sh@915 -- # local force 00:11:48.613 11:39:21 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:11:48.613 11:39:21 -- common/autotest_common.sh@920 -- # force=-f 00:11:48.613 11:39:21 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:48.872 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:48.872 = sectsz=512 attr=2, projid32bit=1 00:11:48.872 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:48.872 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:48.872 data = bsize=4096 blocks=130560, imaxpct=25 00:11:48.872 = sunit=0 swidth=0 blks 00:11:48.872 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:48.872 log =internal log bsize=4096 blocks=16384, version=2 00:11:48.872 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:48.872 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:49.439 Discarding blocks...Done. 00:11:49.439 11:39:22 -- common/autotest_common.sh@931 -- # return 0 00:11:49.439 11:39:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.344 11:39:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.344 11:39:24 -- target/filesystem.sh@25 -- # sync 00:11:51.344 11:39:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.344 11:39:24 -- target/filesystem.sh@27 -- # sync 00:11:51.344 11:39:24 -- target/filesystem.sh@29 -- # i=0 00:11:51.344 11:39:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.344 11:39:24 -- target/filesystem.sh@37 -- # kill -0 61125 00:11:51.344 11:39:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.344 11:39:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.344 11:39:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.344 11:39:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.344 00:11:51.344 real 0m2.638s 00:11:51.344 user 0m0.024s 00:11:51.344 sys 0m0.072s 00:11:51.344 11:39:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:51.344 11:39:24 -- common/autotest_common.sh@10 -- # set +x 00:11:51.344 ************************************ 00:11:51.344 END TEST filesystem_in_capsule_xfs 00:11:51.344 ************************************ 00:11:51.344 11:39:24 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:51.344 11:39:24 -- target/filesystem.sh@93 -- # sync 00:11:51.344 11:39:24 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.603 11:39:24 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.603 11:39:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.603 11:39:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.603 11:39:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.603 11:39:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.603 11:39:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.603 11:39:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.603 11:39:24 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.603 11:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.603 11:39:24 -- common/autotest_common.sh@10 -- # set +x 00:11:51.603 11:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.603 11:39:24 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:51.603 11:39:24 -- target/filesystem.sh@101 -- # killprocess 61125 00:11:51.603 11:39:24 -- common/autotest_common.sh@936 -- # '[' -z 61125 ']' 00:11:51.603 11:39:24 -- common/autotest_common.sh@940 -- # kill -0 61125 00:11:51.603 11:39:24 -- 
common/autotest_common.sh@941 -- # uname 00:11:51.603 11:39:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.603 11:39:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61125 00:11:51.603 11:39:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:51.603 11:39:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:51.603 11:39:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61125' 00:11:51.603 killing process with pid 61125 00:11:51.603 11:39:24 -- common/autotest_common.sh@955 -- # kill 61125 00:11:51.603 11:39:24 -- common/autotest_common.sh@960 -- # wait 61125 00:11:52.184 11:39:24 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:52.184 00:11:52.184 real 0m13.912s 00:11:52.184 user 0m53.526s 00:11:52.184 sys 0m1.706s 00:11:52.184 11:39:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.184 11:39:24 -- common/autotest_common.sh@10 -- # set +x 00:11:52.184 ************************************ 00:11:52.184 END TEST nvmf_filesystem_in_capsule 00:11:52.184 ************************************ 00:11:52.184 11:39:25 -- target/filesystem.sh@108 -- # nvmftestfini 00:11:52.184 11:39:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:52.184 11:39:25 -- nvmf/common.sh@116 -- # sync 00:11:52.184 11:39:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:52.184 11:39:25 -- nvmf/common.sh@119 -- # set +e 00:11:52.184 11:39:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:52.184 11:39:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:52.184 rmmod nvme_tcp 00:11:52.184 rmmod nvme_fabrics 00:11:52.184 rmmod nvme_keyring 00:11:52.184 11:39:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:52.184 11:39:25 -- nvmf/common.sh@123 -- # set -e 00:11:52.184 11:39:25 -- nvmf/common.sh@124 -- # return 0 00:11:52.184 11:39:25 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:11:52.184 11:39:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:52.184 11:39:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:52.184 11:39:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:52.184 11:39:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.184 11:39:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:52.184 11:39:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.184 11:39:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.184 11:39:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.184 11:39:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:52.184 00:11:52.184 real 0m29.114s 00:11:52.184 user 1m48.901s 00:11:52.184 sys 0m3.736s 00:11:52.184 11:39:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.184 11:39:25 -- common/autotest_common.sh@10 -- # set +x 00:11:52.184 ************************************ 00:11:52.184 END TEST nvmf_filesystem 00:11:52.184 ************************************ 00:11:52.444 11:39:25 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:52.444 11:39:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.444 11:39:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.444 11:39:25 -- common/autotest_common.sh@10 -- # set +x 00:11:52.444 ************************************ 00:11:52.444 START TEST nvmf_discovery 00:11:52.444 ************************************ 00:11:52.444 11:39:25 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:52.444 * Looking for test storage... 00:11:52.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.444 11:39:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:52.444 11:39:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:52.444 11:39:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:52.444 11:39:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:52.444 11:39:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:52.444 11:39:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:52.444 11:39:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:52.444 11:39:25 -- scripts/common.sh@335 -- # IFS=.-: 00:11:52.444 11:39:25 -- scripts/common.sh@335 -- # read -ra ver1 00:11:52.444 11:39:25 -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.444 11:39:25 -- scripts/common.sh@336 -- # read -ra ver2 00:11:52.444 11:39:25 -- scripts/common.sh@337 -- # local 'op=<' 00:11:52.444 11:39:25 -- scripts/common.sh@339 -- # ver1_l=2 00:11:52.444 11:39:25 -- scripts/common.sh@340 -- # ver2_l=1 00:11:52.444 11:39:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:52.444 11:39:25 -- scripts/common.sh@343 -- # case "$op" in 00:11:52.444 11:39:25 -- scripts/common.sh@344 -- # : 1 00:11:52.444 11:39:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:52.444 11:39:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.444 11:39:25 -- scripts/common.sh@364 -- # decimal 1 00:11:52.444 11:39:25 -- scripts/common.sh@352 -- # local d=1 00:11:52.444 11:39:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.444 11:39:25 -- scripts/common.sh@354 -- # echo 1 00:11:52.444 11:39:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:52.444 11:39:25 -- scripts/common.sh@365 -- # decimal 2 00:11:52.444 11:39:25 -- scripts/common.sh@352 -- # local d=2 00:11:52.444 11:39:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.444 11:39:25 -- scripts/common.sh@354 -- # echo 2 00:11:52.444 11:39:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:52.444 11:39:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:52.444 11:39:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:52.444 11:39:25 -- scripts/common.sh@367 -- # return 0 00:11:52.444 11:39:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.444 11:39:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.444 --rc genhtml_branch_coverage=1 00:11:52.444 --rc genhtml_function_coverage=1 00:11:52.444 --rc genhtml_legend=1 00:11:52.444 --rc geninfo_all_blocks=1 00:11:52.444 --rc geninfo_unexecuted_blocks=1 00:11:52.444 00:11:52.444 ' 00:11:52.444 11:39:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.444 --rc genhtml_branch_coverage=1 00:11:52.444 --rc genhtml_function_coverage=1 00:11:52.444 --rc genhtml_legend=1 00:11:52.444 --rc geninfo_all_blocks=1 00:11:52.444 --rc geninfo_unexecuted_blocks=1 00:11:52.444 00:11:52.444 ' 00:11:52.444 11:39:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.444 --rc genhtml_branch_coverage=1 00:11:52.444 --rc genhtml_function_coverage=1 00:11:52.444 --rc genhtml_legend=1 00:11:52.444 
--rc geninfo_all_blocks=1 00:11:52.444 --rc geninfo_unexecuted_blocks=1 00:11:52.444 00:11:52.444 ' 00:11:52.444 11:39:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.444 --rc genhtml_branch_coverage=1 00:11:52.444 --rc genhtml_function_coverage=1 00:11:52.444 --rc genhtml_legend=1 00:11:52.444 --rc geninfo_all_blocks=1 00:11:52.444 --rc geninfo_unexecuted_blocks=1 00:11:52.444 00:11:52.444 ' 00:11:52.444 11:39:25 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.444 11:39:25 -- nvmf/common.sh@7 -- # uname -s 00:11:52.703 11:39:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.703 11:39:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.703 11:39:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.703 11:39:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.703 11:39:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.703 11:39:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.703 11:39:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.703 11:39:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.703 11:39:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.703 11:39:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.703 11:39:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:11:52.703 11:39:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:11:52.703 11:39:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.703 11:39:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.703 11:39:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.703 11:39:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.703 11:39:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.703 11:39:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.703 11:39:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.703 11:39:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.703 11:39:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.704 11:39:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.704 11:39:25 -- paths/export.sh@5 -- # export PATH 00:11:52.704 11:39:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.704 11:39:25 -- nvmf/common.sh@46 -- # : 0 00:11:52.704 11:39:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:52.704 11:39:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:52.704 11:39:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:52.704 11:39:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.704 11:39:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.704 11:39:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:52.704 11:39:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:52.704 11:39:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:52.704 11:39:25 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:52.704 11:39:25 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:52.704 11:39:25 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:52.704 11:39:25 -- target/discovery.sh@15 -- # hash nvme 00:11:52.704 11:39:25 -- target/discovery.sh@20 -- # nvmftestinit 00:11:52.704 11:39:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:52.704 11:39:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.704 11:39:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:52.704 11:39:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:52.704 11:39:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:52.704 11:39:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.704 11:39:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.704 11:39:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.704 11:39:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:52.704 11:39:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:52.704 11:39:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:52.704 11:39:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:52.704 11:39:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:52.704 11:39:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:52.704 11:39:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.704 11:39:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.704 11:39:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:52.704 11:39:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:52.704 11:39:25 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.704 11:39:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.704 11:39:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.704 11:39:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.704 11:39:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.704 11:39:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.704 11:39:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.704 11:39:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.704 11:39:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:52.704 11:39:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:52.704 Cannot find device "nvmf_tgt_br" 00:11:52.704 11:39:25 -- nvmf/common.sh@154 -- # true 00:11:52.704 11:39:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.704 Cannot find device "nvmf_tgt_br2" 00:11:52.704 11:39:25 -- nvmf/common.sh@155 -- # true 00:11:52.704 11:39:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:52.704 11:39:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:52.704 Cannot find device "nvmf_tgt_br" 00:11:52.704 11:39:25 -- nvmf/common.sh@157 -- # true 00:11:52.704 11:39:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:52.704 Cannot find device "nvmf_tgt_br2" 00:11:52.704 11:39:25 -- nvmf/common.sh@158 -- # true 00:11:52.704 11:39:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:52.704 11:39:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:52.704 11:39:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.704 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.704 11:39:25 -- nvmf/common.sh@161 -- # true 00:11:52.704 11:39:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.704 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.704 11:39:25 -- nvmf/common.sh@162 -- # true 00:11:52.704 11:39:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.704 11:39:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.704 11:39:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.704 11:39:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.704 11:39:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.964 11:39:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.964 11:39:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.964 11:39:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:52.964 11:39:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:52.964 11:39:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:52.964 11:39:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:52.964 11:39:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:52.964 11:39:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:52.964 11:39:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.964 11:39:25 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:52.964 11:39:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.964 11:39:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:52.964 11:39:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:52.964 11:39:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:52.964 11:39:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:52.964 11:39:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:52.964 11:39:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:52.964 11:39:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:52.964 11:39:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:52.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:11:52.964 00:11:52.964 --- 10.0.0.2 ping statistics --- 00:11:52.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.964 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:52.964 11:39:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:52.964 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:52.964 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:52.964 00:11:52.964 --- 10.0.0.3 ping statistics --- 00:11:52.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.964 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:52.964 11:39:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:52.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:52.964 00:11:52.964 --- 10.0.0.1 ping statistics --- 00:11:52.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.964 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:52.964 11:39:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.964 11:39:25 -- nvmf/common.sh@421 -- # return 0 00:11:52.964 11:39:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:52.964 11:39:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.964 11:39:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:52.964 11:39:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:52.964 11:39:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.964 11:39:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:52.964 11:39:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:52.964 11:39:25 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:52.964 11:39:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:52.964 11:39:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.964 11:39:25 -- common/autotest_common.sh@10 -- # set +x 00:11:52.964 11:39:25 -- nvmf/common.sh@469 -- # nvmfpid=61675 00:11:52.964 11:39:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.964 11:39:25 -- nvmf/common.sh@470 -- # waitforlisten 61675 00:11:52.964 11:39:25 -- common/autotest_common.sh@829 -- # '[' -z 61675 ']' 00:11:52.964 11:39:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.964 11:39:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.964 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.964 11:39:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.964 11:39:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.964 11:39:25 -- common/autotest_common.sh@10 -- # set +x 00:11:52.964 [2024-11-20 11:39:25.984185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:52.964 [2024-11-20 11:39:25.984264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.223 [2024-11-20 11:39:26.125639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.223 [2024-11-20 11:39:26.235328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:53.223 [2024-11-20 11:39:26.235478] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.223 [2024-11-20 11:39:26.235487] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.223 [2024-11-20 11:39:26.235493] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.223 [2024-11-20 11:39:26.235762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.223 [2024-11-20 11:39:26.235958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.223 [2024-11-20 11:39:26.236247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.223 [2024-11-20 11:39:26.236280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.163 11:39:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.163 11:39:26 -- common/autotest_common.sh@862 -- # return 0 00:11:54.163 11:39:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:54.163 11:39:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:54.163 11:39:26 -- common/autotest_common.sh@10 -- # set +x 00:11:54.163 11:39:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.163 11:39:26 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.163 11:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.163 11:39:26 -- common/autotest_common.sh@10 -- # set +x 00:11:54.163 [2024-11-20 11:39:27.009067] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.163 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.163 11:39:27 -- target/discovery.sh@26 -- # seq 1 4 00:11:54.164 11:39:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.164 11:39:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 Null1 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
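At this point discovery.sh is looping over four subsystems: each pass creates a null bdev, a subsystem with a fixed serial number, attaches the bdev as namespace 1, and adds an NVMe/TCP listener on 10.0.0.2:4420. The rpc_cmd calls in the trace go through SPDK's JSON-RPC client (scripts/rpc.py under the hood); a minimal standalone sketch of one pass, assuming a target is already listening on the default /var/tmp/spdk.sock, would look roughly like:

    # one pass of the discovery.sh setup loop, issued through SPDK's RPC client
    ./scripts/rpc.py bdev_null_create Null1 102400 512                        # null bdev with 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                              # -a: allow any host, -s: serial number
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1   # expose the bdev as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                            # NVMe/TCP listener for the subsystem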
00:11:54.164 11:39:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 [2024-11-20 11:39:27.082907] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.164 11:39:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 Null2 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.164 11:39:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 Null3 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.164 11:39:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 Null4 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.164 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.164 11:39:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:54.164 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.164 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.425 11:39:27 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.425 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.425 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.425 11:39:27 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.425 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.425 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.425 11:39:27 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 4420 00:11:54.425 00:11:54.425 Discovery Log Number of Records 6, Generation counter 6 00:11:54.425 =====Discovery Log Entry 0====== 00:11:54.425 trtype: tcp 00:11:54.425 adrfam: ipv4 00:11:54.425 subtype: current discovery subsystem 00:11:54.425 treq: not required 00:11:54.425 portid: 0 00:11:54.425 trsvcid: 4420 00:11:54.425 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.425 traddr: 10.0.0.2 00:11:54.425 eflags: explicit discovery connections, duplicate discovery information 00:11:54.425 sectype: none 00:11:54.425 =====Discovery Log Entry 1====== 00:11:54.425 trtype: tcp 00:11:54.425 adrfam: ipv4 00:11:54.425 subtype: nvme subsystem 00:11:54.425 treq: not required 00:11:54.425 portid: 0 00:11:54.425 trsvcid: 4420 00:11:54.425 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:54.425 traddr: 10.0.0.2 00:11:54.425 eflags: none 00:11:54.425 sectype: none 00:11:54.425 =====Discovery Log Entry 2====== 00:11:54.425 trtype: tcp 00:11:54.425 adrfam: ipv4 00:11:54.425 subtype: nvme subsystem 00:11:54.425 treq: not required 00:11:54.425 portid: 0 00:11:54.425 trsvcid: 4420 
00:11:54.425 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:54.425 traddr: 10.0.0.2 00:11:54.425 eflags: none 00:11:54.425 sectype: none 00:11:54.425 =====Discovery Log Entry 3====== 00:11:54.425 trtype: tcp 00:11:54.425 adrfam: ipv4 00:11:54.425 subtype: nvme subsystem 00:11:54.425 treq: not required 00:11:54.425 portid: 0 00:11:54.425 trsvcid: 4420 00:11:54.425 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:54.425 traddr: 10.0.0.2 00:11:54.425 eflags: none 00:11:54.425 sectype: none 00:11:54.425 =====Discovery Log Entry 4====== 00:11:54.425 trtype: tcp 00:11:54.425 adrfam: ipv4 00:11:54.425 subtype: nvme subsystem 00:11:54.425 treq: not required 00:11:54.425 portid: 0 00:11:54.425 trsvcid: 4420 00:11:54.425 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:54.425 traddr: 10.0.0.2 00:11:54.425 eflags: none 00:11:54.425 sectype: none 00:11:54.425 =====Discovery Log Entry 5====== 00:11:54.425 trtype: tcp 00:11:54.425 adrfam: ipv4 00:11:54.425 subtype: discovery subsystem referral 00:11:54.425 treq: not required 00:11:54.425 portid: 0 00:11:54.425 trsvcid: 4430 00:11:54.425 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.425 traddr: 10.0.0.2 00:11:54.425 eflags: none 00:11:54.425 sectype: none 00:11:54.425 Perform nvmf subsystem discovery via RPC 00:11:54.425 11:39:27 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:54.425 11:39:27 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:54.425 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.425 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 [2024-11-20 11:39:27.346815] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:11:54.425 [ 00:11:54.425 { 00:11:54.425 "allow_any_host": true, 00:11:54.425 "hosts": [], 00:11:54.425 "listen_addresses": [ 00:11:54.425 { 00:11:54.425 "adrfam": "IPv4", 00:11:54.425 "traddr": "10.0.0.2", 00:11:54.425 "transport": "TCP", 00:11:54.425 "trsvcid": "4420", 00:11:54.425 "trtype": "TCP" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:54.425 "subtype": "Discovery" 00:11:54.425 }, 00:11:54.425 { 00:11:54.425 "allow_any_host": true, 00:11:54.425 "hosts": [], 00:11:54.425 "listen_addresses": [ 00:11:54.425 { 00:11:54.425 "adrfam": "IPv4", 00:11:54.425 "traddr": "10.0.0.2", 00:11:54.425 "transport": "TCP", 00:11:54.425 "trsvcid": "4420", 00:11:54.425 "trtype": "TCP" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "max_cntlid": 65519, 00:11:54.425 "max_namespaces": 32, 00:11:54.425 "min_cntlid": 1, 00:11:54.425 "model_number": "SPDK bdev Controller", 00:11:54.425 "namespaces": [ 00:11:54.425 { 00:11:54.425 "bdev_name": "Null1", 00:11:54.425 "name": "Null1", 00:11:54.425 "nguid": "6A0C874CABB4401BACEC0B820A8D5922", 00:11:54.425 "nsid": 1, 00:11:54.425 "uuid": "6a0c874c-abb4-401b-acec-0b820a8d5922" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.425 "serial_number": "SPDK00000000000001", 00:11:54.425 "subtype": "NVMe" 00:11:54.425 }, 00:11:54.425 { 00:11:54.425 "allow_any_host": true, 00:11:54.425 "hosts": [], 00:11:54.425 "listen_addresses": [ 00:11:54.425 { 00:11:54.425 "adrfam": "IPv4", 00:11:54.425 "traddr": "10.0.0.2", 00:11:54.425 "transport": "TCP", 00:11:54.425 "trsvcid": "4420", 00:11:54.425 "trtype": "TCP" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "max_cntlid": 65519, 00:11:54.425 "max_namespaces": 32, 00:11:54.425 "min_cntlid": 1, 
00:11:54.425 "model_number": "SPDK bdev Controller", 00:11:54.425 "namespaces": [ 00:11:54.425 { 00:11:54.425 "bdev_name": "Null2", 00:11:54.425 "name": "Null2", 00:11:54.425 "nguid": "4EAB999CD8024BC8832B069A4451C1F8", 00:11:54.425 "nsid": 1, 00:11:54.425 "uuid": "4eab999c-d802-4bc8-832b-069a4451c1f8" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:54.425 "serial_number": "SPDK00000000000002", 00:11:54.425 "subtype": "NVMe" 00:11:54.425 }, 00:11:54.425 { 00:11:54.425 "allow_any_host": true, 00:11:54.425 "hosts": [], 00:11:54.425 "listen_addresses": [ 00:11:54.425 { 00:11:54.425 "adrfam": "IPv4", 00:11:54.425 "traddr": "10.0.0.2", 00:11:54.425 "transport": "TCP", 00:11:54.425 "trsvcid": "4420", 00:11:54.425 "trtype": "TCP" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "max_cntlid": 65519, 00:11:54.425 "max_namespaces": 32, 00:11:54.425 "min_cntlid": 1, 00:11:54.425 "model_number": "SPDK bdev Controller", 00:11:54.425 "namespaces": [ 00:11:54.425 { 00:11:54.425 "bdev_name": "Null3", 00:11:54.425 "name": "Null3", 00:11:54.425 "nguid": "CDBF90783EEE46EE989F7E537275864E", 00:11:54.425 "nsid": 1, 00:11:54.425 "uuid": "cdbf9078-3eee-46ee-989f-7e537275864e" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:54.425 "serial_number": "SPDK00000000000003", 00:11:54.425 "subtype": "NVMe" 00:11:54.425 }, 00:11:54.425 { 00:11:54.425 "allow_any_host": true, 00:11:54.425 "hosts": [], 00:11:54.425 "listen_addresses": [ 00:11:54.425 { 00:11:54.425 "adrfam": "IPv4", 00:11:54.425 "traddr": "10.0.0.2", 00:11:54.425 "transport": "TCP", 00:11:54.425 "trsvcid": "4420", 00:11:54.425 "trtype": "TCP" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "max_cntlid": 65519, 00:11:54.425 "max_namespaces": 32, 00:11:54.425 "min_cntlid": 1, 00:11:54.425 "model_number": "SPDK bdev Controller", 00:11:54.425 "namespaces": [ 00:11:54.425 { 00:11:54.425 "bdev_name": "Null4", 00:11:54.425 "name": "Null4", 00:11:54.425 "nguid": "2E1AF664706047B69FC819311B1D9493", 00:11:54.425 "nsid": 1, 00:11:54.425 "uuid": "2e1af664-7060-47b6-9fc8-19311b1d9493" 00:11:54.425 } 00:11:54.425 ], 00:11:54.425 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:54.425 "serial_number": "SPDK00000000000004", 00:11:54.425 "subtype": "NVMe" 00:11:54.425 } 00:11:54.425 ] 00:11:54.425 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.425 11:39:27 -- target/discovery.sh@42 -- # seq 1 4 00:11:54.425 11:39:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.425 11:39:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.425 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.425 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.425 11:39:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:54.425 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.425 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.425 11:39:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.425 11:39:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:54.425 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.425 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.425 11:39:27 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:54.425 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.425 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.426 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.426 11:39:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.426 11:39:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:54.426 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.426 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.426 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.426 11:39:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:54.426 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.426 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.426 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.426 11:39:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.426 11:39:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:54.426 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.426 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.426 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.426 11:39:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:54.426 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.426 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.426 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.426 11:39:27 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.426 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.426 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.426 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.426 11:39:27 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:54.426 11:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.426 11:39:27 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:54.426 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.685 11:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.685 11:39:27 -- target/discovery.sh@49 -- # check_bdevs= 00:11:54.685 11:39:27 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:54.685 11:39:27 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:54.685 11:39:27 -- target/discovery.sh@57 -- # nvmftestfini 00:11:54.685 11:39:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:54.685 11:39:27 -- nvmf/common.sh@116 -- # sync 00:11:54.685 11:39:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:54.685 11:39:27 -- nvmf/common.sh@119 -- # set +e 00:11:54.685 11:39:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:54.685 11:39:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:54.685 rmmod nvme_tcp 00:11:54.685 rmmod nvme_fabrics 00:11:54.685 rmmod nvme_keyring 00:11:54.685 11:39:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:54.685 11:39:27 -- nvmf/common.sh@123 -- # set -e 00:11:54.685 11:39:27 -- nvmf/common.sh@124 -- # return 0 00:11:54.685 11:39:27 -- nvmf/common.sh@477 -- # '[' -n 61675 ']' 00:11:54.685 11:39:27 -- nvmf/common.sh@478 -- # killprocess 61675 00:11:54.685 11:39:27 -- common/autotest_common.sh@936 -- # '[' -z 61675 ']' 00:11:54.685 11:39:27 -- 
common/autotest_common.sh@940 -- # kill -0 61675 00:11:54.685 11:39:27 -- common/autotest_common.sh@941 -- # uname 00:11:54.685 11:39:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:54.685 11:39:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61675 00:11:54.685 11:39:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.685 11:39:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.685 11:39:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61675' 00:11:54.685 killing process with pid 61675 00:11:54.685 11:39:27 -- common/autotest_common.sh@955 -- # kill 61675 00:11:54.685 [2024-11-20 11:39:27.641248] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transpor 11:39:27 -- common/autotest_common.sh@960 -- # wait 61675 00:11:54.685 t is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:11:54.944 11:39:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:54.944 11:39:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:54.944 11:39:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:54.944 11:39:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.944 11:39:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:54.944 11:39:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.944 11:39:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.944 11:39:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.944 11:39:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:54.944 00:11:54.944 real 0m2.646s 00:11:54.944 user 0m6.860s 00:11:54.944 sys 0m0.735s 00:11:54.944 11:39:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:54.944 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.944 ************************************ 00:11:54.944 END TEST nvmf_discovery 00:11:54.944 ************************************ 00:11:54.944 11:39:27 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:54.944 11:39:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:54.944 11:39:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.944 11:39:27 -- common/autotest_common.sh@10 -- # set +x 00:11:54.944 ************************************ 00:11:54.944 START TEST nvmf_referrals 00:11:54.944 ************************************ 00:11:54.944 11:39:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:55.204 * Looking for test storage... 
00:11:55.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:55.204 11:39:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:55.204 11:39:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:55.204 11:39:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:55.204 11:39:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:55.204 11:39:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:55.204 11:39:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:55.204 11:39:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:55.204 11:39:28 -- scripts/common.sh@335 -- # IFS=.-: 00:11:55.204 11:39:28 -- scripts/common.sh@335 -- # read -ra ver1 00:11:55.204 11:39:28 -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.204 11:39:28 -- scripts/common.sh@336 -- # read -ra ver2 00:11:55.204 11:39:28 -- scripts/common.sh@337 -- # local 'op=<' 00:11:55.204 11:39:28 -- scripts/common.sh@339 -- # ver1_l=2 00:11:55.204 11:39:28 -- scripts/common.sh@340 -- # ver2_l=1 00:11:55.204 11:39:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:55.204 11:39:28 -- scripts/common.sh@343 -- # case "$op" in 00:11:55.204 11:39:28 -- scripts/common.sh@344 -- # : 1 00:11:55.204 11:39:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:55.204 11:39:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.204 11:39:28 -- scripts/common.sh@364 -- # decimal 1 00:11:55.204 11:39:28 -- scripts/common.sh@352 -- # local d=1 00:11:55.204 11:39:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.204 11:39:28 -- scripts/common.sh@354 -- # echo 1 00:11:55.204 11:39:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:55.204 11:39:28 -- scripts/common.sh@365 -- # decimal 2 00:11:55.204 11:39:28 -- scripts/common.sh@352 -- # local d=2 00:11:55.204 11:39:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.204 11:39:28 -- scripts/common.sh@354 -- # echo 2 00:11:55.204 11:39:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:55.204 11:39:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:55.204 11:39:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:55.204 11:39:28 -- scripts/common.sh@367 -- # return 0 00:11:55.204 11:39:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.204 11:39:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:55.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.204 --rc genhtml_branch_coverage=1 00:11:55.204 --rc genhtml_function_coverage=1 00:11:55.204 --rc genhtml_legend=1 00:11:55.204 --rc geninfo_all_blocks=1 00:11:55.204 --rc geninfo_unexecuted_blocks=1 00:11:55.204 00:11:55.204 ' 00:11:55.204 11:39:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:55.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.204 --rc genhtml_branch_coverage=1 00:11:55.204 --rc genhtml_function_coverage=1 00:11:55.204 --rc genhtml_legend=1 00:11:55.204 --rc geninfo_all_blocks=1 00:11:55.204 --rc geninfo_unexecuted_blocks=1 00:11:55.204 00:11:55.204 ' 00:11:55.204 11:39:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:55.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.204 --rc genhtml_branch_coverage=1 00:11:55.204 --rc genhtml_function_coverage=1 00:11:55.204 --rc genhtml_legend=1 00:11:55.204 --rc geninfo_all_blocks=1 00:11:55.204 --rc geninfo_unexecuted_blocks=1 00:11:55.204 00:11:55.204 ' 00:11:55.204 
11:39:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:55.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.204 --rc genhtml_branch_coverage=1 00:11:55.204 --rc genhtml_function_coverage=1 00:11:55.204 --rc genhtml_legend=1 00:11:55.204 --rc geninfo_all_blocks=1 00:11:55.204 --rc geninfo_unexecuted_blocks=1 00:11:55.204 00:11:55.204 ' 00:11:55.204 11:39:28 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.204 11:39:28 -- nvmf/common.sh@7 -- # uname -s 00:11:55.204 11:39:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.204 11:39:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.204 11:39:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.204 11:39:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.204 11:39:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.204 11:39:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.204 11:39:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.204 11:39:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.204 11:39:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.204 11:39:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.204 11:39:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:11:55.204 11:39:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:11:55.204 11:39:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.204 11:39:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.204 11:39:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:55.204 11:39:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.204 11:39:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.204 11:39:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.204 11:39:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.204 11:39:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.204 11:39:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.204 11:39:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.204 11:39:28 -- paths/export.sh@5 -- # export PATH 00:11:55.204 11:39:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.204 11:39:28 -- nvmf/common.sh@46 -- # : 0 00:11:55.204 11:39:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:55.204 11:39:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:55.204 11:39:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:55.204 11:39:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.204 11:39:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.204 11:39:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:55.204 11:39:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:55.204 11:39:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:55.204 11:39:28 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:55.466 11:39:28 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:55.466 11:39:28 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:55.466 11:39:28 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:55.466 11:39:28 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:55.466 11:39:28 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:55.466 11:39:28 -- target/referrals.sh@37 -- # nvmftestinit 00:11:55.466 11:39:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:55.466 11:39:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.466 11:39:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:55.466 11:39:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:55.466 11:39:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:55.466 11:39:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.466 11:39:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.466 11:39:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.466 11:39:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:55.466 11:39:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:55.466 11:39:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:55.466 11:39:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:55.466 11:39:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:55.466 11:39:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:55.466 11:39:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.466 11:39:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:11:55.466 11:39:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:55.466 11:39:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:55.466 11:39:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:55.466 11:39:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:55.466 11:39:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:55.466 11:39:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.466 11:39:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:55.466 11:39:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:55.466 11:39:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:55.466 11:39:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:55.466 11:39:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:55.466 11:39:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:55.466 Cannot find device "nvmf_tgt_br" 00:11:55.466 11:39:28 -- nvmf/common.sh@154 -- # true 00:11:55.466 11:39:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.466 Cannot find device "nvmf_tgt_br2" 00:11:55.466 11:39:28 -- nvmf/common.sh@155 -- # true 00:11:55.466 11:39:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:55.466 11:39:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:55.466 Cannot find device "nvmf_tgt_br" 00:11:55.466 11:39:28 -- nvmf/common.sh@157 -- # true 00:11:55.466 11:39:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:55.466 Cannot find device "nvmf_tgt_br2" 00:11:55.466 11:39:28 -- nvmf/common.sh@158 -- # true 00:11:55.466 11:39:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:55.466 11:39:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:55.466 11:39:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.466 11:39:28 -- nvmf/common.sh@161 -- # true 00:11:55.466 11:39:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.466 11:39:28 -- nvmf/common.sh@162 -- # true 00:11:55.466 11:39:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.466 11:39:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.466 11:39:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.466 11:39:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.466 11:39:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.466 11:39:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.466 11:39:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.466 11:39:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.466 11:39:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.466 11:39:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:55.466 11:39:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:55.466 11:39:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:11:55.466 11:39:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:55.466 11:39:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.725 11:39:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.725 11:39:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.725 11:39:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:55.725 11:39:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:55.725 11:39:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.725 11:39:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.725 11:39:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.725 11:39:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.725 11:39:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.725 11:39:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:55.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:55.725 00:11:55.726 --- 10.0.0.2 ping statistics --- 00:11:55.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.726 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:55.726 11:39:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:55.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:11:55.726 00:11:55.726 --- 10.0.0.3 ping statistics --- 00:11:55.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.726 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:55.726 11:39:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:55.726 00:11:55.726 --- 10.0.0.1 ping statistics --- 00:11:55.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.726 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:55.726 11:39:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.726 11:39:28 -- nvmf/common.sh@421 -- # return 0 00:11:55.726 11:39:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:55.726 11:39:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.726 11:39:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:55.726 11:39:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:55.726 11:39:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.726 11:39:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:55.726 11:39:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:55.726 11:39:28 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:55.726 11:39:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:55.726 11:39:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:55.726 11:39:28 -- common/autotest_common.sh@10 -- # set +x 00:11:55.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
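The veth plumbing that just completed mirrors what discovery.sh built earlier: nvmf_veth_init creates an initiator-side veth pair in the root namespace, target-side pairs moved into nvmf_tgt_ns_spdk, and a bridge joining the peer ends, then verifies reachability with single pings before the target is launched. Condensed from the trace above (second target interface omitted), the essential commands are roughly:

    # namespace plus veth pairs; the initiator end stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 initiator, 10.0.0.2 target inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the peer ends together and let NVMe/TCP traffic reach the initiator interface
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check across the bridge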
00:11:55.726 11:39:28 -- nvmf/common.sh@469 -- # nvmfpid=61909 00:11:55.726 11:39:28 -- nvmf/common.sh@470 -- # waitforlisten 61909 00:11:55.726 11:39:28 -- common/autotest_common.sh@829 -- # '[' -z 61909 ']' 00:11:55.726 11:39:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.726 11:39:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.726 11:39:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.726 11:39:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.726 11:39:28 -- common/autotest_common.sh@10 -- # set +x 00:11:55.726 11:39:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.726 [2024-11-20 11:39:28.672620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:55.726 [2024-11-20 11:39:28.672728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.986 [2024-11-20 11:39:28.815892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.986 [2024-11-20 11:39:28.923992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:55.986 [2024-11-20 11:39:28.924135] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.986 [2024-11-20 11:39:28.924144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.986 [2024-11-20 11:39:28.924150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:55.986 [2024-11-20 11:39:28.924265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.986 [2024-11-20 11:39:28.924429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.986 [2024-11-20 11:39:28.924506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.986 [2024-11-20 11:39:28.924513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.925 11:39:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.925 11:39:29 -- common/autotest_common.sh@862 -- # return 0 00:11:56.925 11:39:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:56.925 11:39:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 11:39:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.925 11:39:29 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.925 11:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 [2024-11-20 11:39:29.804699] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.925 11:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.925 11:39:29 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:56.925 11:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 [2024-11-20 11:39:29.834944] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:56.925 11:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.925 11:39:29 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:56.925 11:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 11:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.925 11:39:29 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:56.925 11:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 11:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.925 11:39:29 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:56.925 11:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 11:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.925 11:39:29 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.925 11:39:29 -- target/referrals.sh@48 -- # jq length 00:11:56.925 11:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 11:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.925 11:39:29 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:56.925 11:39:29 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:56.925 11:39:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:56.925 11:39:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.925 11:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 
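With the three referrals registered, the test reads them back two ways, through the RPC (nvmf_discovery_get_referrals piped into jq length) and over the wire from the 8009 discovery listener, before removing them again one by one. A minimal standalone sketch of the same checks, assuming the listener and referral addresses used above, would look roughly like:

    # discovery listener on port 8009 plus three referral entries pointing at port 4430
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # read back via RPC: expect a length of 3
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length
    # read back over the wire, filtering out the current discovery subsystem itself
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort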
00:11:56.925 11:39:29 -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 11:39:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:56.925 11:39:29 -- target/referrals.sh@21 -- # sort 00:11:56.925 11:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.185 11:39:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:57.185 11:39:29 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:57.185 11:39:29 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:57.185 11:39:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.185 11:39:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.185 11:39:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.185 11:39:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.185 11:39:29 -- target/referrals.sh@26 -- # sort 00:11:57.185 11:39:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:57.185 11:39:30 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:57.185 11:39:30 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:57.185 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.185 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.185 11:39:30 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:57.185 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.185 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.185 11:39:30 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:57.185 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.185 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.185 11:39:30 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.185 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.185 11:39:30 -- target/referrals.sh@56 -- # jq length 00:11:57.185 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.186 11:39:30 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:57.186 11:39:30 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:57.186 11:39:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.186 11:39:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.186 11:39:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.186 11:39:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.186 11:39:30 -- target/referrals.sh@26 -- # sort 00:11:57.446 11:39:30 -- target/referrals.sh@26 -- # echo 00:11:57.446 11:39:30 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:57.446 11:39:30 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:57.446 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.446 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.446 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.446 11:39:30 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.446 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.446 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.446 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.446 11:39:30 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:57.446 11:39:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.446 11:39:30 -- target/referrals.sh@21 -- # sort 00:11:57.446 11:39:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.446 11:39:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.446 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.446 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.446 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.446 11:39:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:57.446 11:39:30 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.446 11:39:30 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:57.446 11:39:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.446 11:39:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.446 11:39:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.446 11:39:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.446 11:39:30 -- target/referrals.sh@26 -- # sort 00:11:57.706 11:39:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:57.706 11:39:30 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.706 11:39:30 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:57.706 11:39:30 -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:57.706 11:39:30 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.706 11:39:30 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.706 11:39:30 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:57.706 11:39:30 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:57.706 11:39:30 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:57.706 11:39:30 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:57.706 11:39:30 -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:57.706 11:39:30 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
--hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.706 11:39:30 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:57.966 11:39:30 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:57.966 11:39:30 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.966 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.966 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.966 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.966 11:39:30 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:57.966 11:39:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.966 11:39:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.966 11:39:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.966 11:39:30 -- target/referrals.sh@21 -- # sort 00:11:57.966 11:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.966 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:57.966 11:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.966 11:39:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:57.966 11:39:30 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:57.966 11:39:30 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:57.966 11:39:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.966 11:39:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.966 11:39:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.966 11:39:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.966 11:39:30 -- target/referrals.sh@26 -- # sort 00:11:57.966 11:39:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:57.966 11:39:30 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:57.966 11:39:30 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:57.966 11:39:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.966 11:39:31 -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:57.966 11:39:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:57.966 11:39:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.227 11:39:31 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:58.227 11:39:31 -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:58.227 11:39:31 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:58.227 11:39:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:58.227 11:39:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.227 11:39:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
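The referral checks traced above reduce to a short RPC/discovery round trip: referrals are added over the RPC socket, read back with nvmf_discovery_get_referrals, cross-checked against what an initiator sees via nvme discover, then removed and re-checked against an empty list. A sketch of one such pass (rpc.py stands in for the script's rpc_cmd wrapper, which is an assumption about how that helper is implemented; the host NQN/ID values are the ones generated earlier in the log):

    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a \
        --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430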
00:11:58.227 11:39:31 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:58.227 11:39:31 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:58.227 11:39:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.227 11:39:31 -- common/autotest_common.sh@10 -- # set +x 00:11:58.227 11:39:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.227 11:39:31 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.227 11:39:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.227 11:39:31 -- common/autotest_common.sh@10 -- # set +x 00:11:58.227 11:39:31 -- target/referrals.sh@82 -- # jq length 00:11:58.227 11:39:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.487 11:39:31 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:58.487 11:39:31 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:58.487 11:39:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.487 11:39:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.487 11:39:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.487 11:39:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.487 11:39:31 -- target/referrals.sh@26 -- # sort 00:11:58.487 11:39:31 -- target/referrals.sh@26 -- # echo 00:11:58.487 11:39:31 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:58.487 11:39:31 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:58.487 11:39:31 -- target/referrals.sh@86 -- # nvmftestfini 00:11:58.487 11:39:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:58.487 11:39:31 -- nvmf/common.sh@116 -- # sync 00:11:58.746 11:39:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:58.746 11:39:31 -- nvmf/common.sh@119 -- # set +e 00:11:58.746 11:39:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:58.746 11:39:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:58.746 rmmod nvme_tcp 00:11:58.746 rmmod nvme_fabrics 00:11:58.746 rmmod nvme_keyring 00:11:58.746 11:39:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:58.746 11:39:31 -- nvmf/common.sh@123 -- # set -e 00:11:58.746 11:39:31 -- nvmf/common.sh@124 -- # return 0 00:11:58.746 11:39:31 -- nvmf/common.sh@477 -- # '[' -n 61909 ']' 00:11:58.746 11:39:31 -- nvmf/common.sh@478 -- # killprocess 61909 00:11:58.746 11:39:31 -- common/autotest_common.sh@936 -- # '[' -z 61909 ']' 00:11:58.746 11:39:31 -- common/autotest_common.sh@940 -- # kill -0 61909 00:11:58.746 11:39:31 -- common/autotest_common.sh@941 -- # uname 00:11:58.746 11:39:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.746 11:39:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61909 00:11:58.746 killing process with pid 61909 00:11:58.746 11:39:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:58.746 11:39:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:58.746 11:39:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61909' 00:11:58.746 11:39:31 -- common/autotest_common.sh@955 -- # kill 61909 00:11:58.746 11:39:31 -- common/autotest_common.sh@960 -- # wait 61909 00:11:59.008 11:39:31 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:59.008 11:39:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:59.008 11:39:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:59.008 11:39:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.008 11:39:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:59.008 11:39:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.008 11:39:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.008 11:39:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.008 11:39:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:59.008 00:11:59.008 real 0m3.957s 00:11:59.008 user 0m12.931s 00:11:59.008 sys 0m1.081s 00:11:59.008 11:39:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:59.008 11:39:31 -- common/autotest_common.sh@10 -- # set +x 00:11:59.008 ************************************ 00:11:59.008 END TEST nvmf_referrals 00:11:59.008 ************************************ 00:11:59.008 11:39:31 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:59.008 11:39:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:59.008 11:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.008 11:39:31 -- common/autotest_common.sh@10 -- # set +x 00:11:59.008 ************************************ 00:11:59.008 START TEST nvmf_connect_disconnect 00:11:59.008 ************************************ 00:11:59.008 11:39:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:59.268 * Looking for test storage... 00:11:59.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.268 11:39:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:59.268 11:39:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:59.268 11:39:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:59.268 11:39:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:59.268 11:39:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:59.268 11:39:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:59.268 11:39:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:59.268 11:39:32 -- scripts/common.sh@335 -- # IFS=.-: 00:11:59.268 11:39:32 -- scripts/common.sh@335 -- # read -ra ver1 00:11:59.268 11:39:32 -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.269 11:39:32 -- scripts/common.sh@336 -- # read -ra ver2 00:11:59.269 11:39:32 -- scripts/common.sh@337 -- # local 'op=<' 00:11:59.269 11:39:32 -- scripts/common.sh@339 -- # ver1_l=2 00:11:59.269 11:39:32 -- scripts/common.sh@340 -- # ver2_l=1 00:11:59.269 11:39:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:59.269 11:39:32 -- scripts/common.sh@343 -- # case "$op" in 00:11:59.269 11:39:32 -- scripts/common.sh@344 -- # : 1 00:11:59.269 11:39:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:59.269 11:39:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.269 11:39:32 -- scripts/common.sh@364 -- # decimal 1 00:11:59.269 11:39:32 -- scripts/common.sh@352 -- # local d=1 00:11:59.269 11:39:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.269 11:39:32 -- scripts/common.sh@354 -- # echo 1 00:11:59.269 11:39:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:59.269 11:39:32 -- scripts/common.sh@365 -- # decimal 2 00:11:59.269 11:39:32 -- scripts/common.sh@352 -- # local d=2 00:11:59.269 11:39:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.269 11:39:32 -- scripts/common.sh@354 -- # echo 2 00:11:59.269 11:39:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:59.269 11:39:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:59.269 11:39:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:59.269 11:39:32 -- scripts/common.sh@367 -- # return 0 00:11:59.269 11:39:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.269 11:39:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 11:39:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 11:39:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 11:39:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 11:39:32 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.269 11:39:32 -- nvmf/common.sh@7 -- # uname -s 00:11:59.269 11:39:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.269 11:39:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.269 11:39:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.269 11:39:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.269 11:39:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.269 11:39:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.269 11:39:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.269 11:39:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.269 11:39:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.269 11:39:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.269 11:39:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
00:11:59.269 11:39:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:11:59.269 11:39:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.269 11:39:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.269 11:39:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.269 11:39:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.269 11:39:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.269 11:39:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.269 11:39:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.269 11:39:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.269 11:39:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.269 11:39:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.269 11:39:32 -- paths/export.sh@5 -- # export PATH 00:11:59.269 11:39:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.269 11:39:32 -- nvmf/common.sh@46 -- # : 0 00:11:59.269 11:39:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:59.269 11:39:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:59.269 11:39:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:59.269 11:39:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.269 11:39:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.269 11:39:32 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:59.269 11:39:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:59.269 11:39:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:59.269 11:39:32 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.269 11:39:32 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.269 11:39:32 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:59.269 11:39:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:59.269 11:39:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.269 11:39:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:59.269 11:39:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:59.269 11:39:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:59.269 11:39:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.269 11:39:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.269 11:39:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.269 11:39:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:59.269 11:39:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:59.269 11:39:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:59.269 11:39:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:59.269 11:39:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:59.269 11:39:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:59.269 11:39:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.270 11:39:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.270 11:39:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:59.270 11:39:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:59.270 11:39:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.270 11:39:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.270 11:39:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.270 11:39:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.270 11:39:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.270 11:39:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.270 11:39:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.270 11:39:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.270 11:39:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:59.270 11:39:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:59.530 Cannot find device "nvmf_tgt_br" 00:11:59.530 11:39:32 -- nvmf/common.sh@154 -- # true 00:11:59.530 11:39:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.530 Cannot find device "nvmf_tgt_br2" 00:11:59.530 11:39:32 -- nvmf/common.sh@155 -- # true 00:11:59.530 11:39:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:59.530 11:39:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:59.530 Cannot find device "nvmf_tgt_br" 00:11:59.530 11:39:32 -- nvmf/common.sh@157 -- # true 00:11:59.530 11:39:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:59.530 Cannot find device "nvmf_tgt_br2" 00:11:59.530 11:39:32 -- nvmf/common.sh@158 -- # true 00:11:59.530 11:39:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:59.530 11:39:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:59.530 11:39:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:11:59.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.530 11:39:32 -- nvmf/common.sh@161 -- # true 00:11:59.530 11:39:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.530 11:39:32 -- nvmf/common.sh@162 -- # true 00:11:59.530 11:39:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.530 11:39:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.530 11:39:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.530 11:39:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:59.530 11:39:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.530 11:39:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:59.530 11:39:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:59.530 11:39:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:59.530 11:39:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:59.530 11:39:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:59.530 11:39:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:59.530 11:39:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:59.530 11:39:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:59.530 11:39:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:59.530 11:39:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:59.530 11:39:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:59.530 11:39:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:59.530 11:39:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:59.530 11:39:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:59.530 11:39:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:59.530 11:39:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:59.789 11:39:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:59.789 11:39:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.789 11:39:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:59.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:11:59.789 00:11:59.789 --- 10.0.0.2 ping statistics --- 00:11:59.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.789 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:59.789 11:39:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:59.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:59.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:59.789 00:11:59.789 --- 10.0.0.3 ping statistics --- 00:11:59.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.789 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:59.789 11:39:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:59.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:59.789 00:11:59.789 --- 10.0.0.1 ping statistics --- 00:11:59.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.789 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:59.789 11:39:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.789 11:39:32 -- nvmf/common.sh@421 -- # return 0 00:11:59.789 11:39:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:59.789 11:39:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.789 11:39:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:59.789 11:39:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:59.789 11:39:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.789 11:39:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:59.789 11:39:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:59.789 11:39:32 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:59.790 11:39:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:59.790 11:39:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.790 11:39:32 -- common/autotest_common.sh@10 -- # set +x 00:11:59.790 11:39:32 -- nvmf/common.sh@469 -- # nvmfpid=62234 00:11:59.790 11:39:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.790 11:39:32 -- nvmf/common.sh@470 -- # waitforlisten 62234 00:11:59.790 11:39:32 -- common/autotest_common.sh@829 -- # '[' -z 62234 ']' 00:11:59.790 11:39:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.790 11:39:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.790 11:39:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.790 11:39:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.790 11:39:32 -- common/autotest_common.sh@10 -- # set +x 00:11:59.790 [2024-11-20 11:39:32.709118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:59.790 [2024-11-20 11:39:32.709220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.050 [2024-11-20 11:39:32.854134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.050 [2024-11-20 11:39:32.962826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:00.050 [2024-11-20 11:39:32.963065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.050 [2024-11-20 11:39:32.963104] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.050 [2024-11-20 11:39:32.963148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
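Before this second target instance was started, nvmf_veth_init rebuilt the virtual topology the TCP tests run on: one veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator side kept at 10.0.0.1, everything bridged over nvmf_br, and the pings above confirming reachability. Condensed to its essentials (commands as they appear in the trace, with the separate link-up steps folded together):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target IP
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT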
00:12:00.050 [2024-11-20 11:39:32.963342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.050 [2024-11-20 11:39:32.963493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.050 [2024-11-20 11:39:32.963621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.050 [2024-11-20 11:39:32.963630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.986 11:39:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.986 11:39:33 -- common/autotest_common.sh@862 -- # return 0 00:12:00.986 11:39:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:00.986 11:39:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.986 11:39:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.986 11:39:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.986 11:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.986 11:39:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.986 [2024-11-20 11:39:33.728916] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.986 11:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:00.986 11:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.986 11:39:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.986 11:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.986 11:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.986 11:39:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.986 11:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:00.986 11:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.986 11:39:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.986 11:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.986 11:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.986 11:39:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.986 [2024-11-20 11:39:33.808473] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.986 11:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:00.986 11:39:33 -- target/connect_disconnect.sh@34 -- # set +x 00:12:03.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:12:12.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.779 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:02.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.446 11:43:18 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
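The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the output of the 100-iteration loop configured earlier (num_iterations=100, NVME_CONNECT='nvme connect -i 8'): each pass connects to the subsystem listening on 10.0.0.2:4420 and disconnects it again, and nvme disconnect prints one such line per pass. In outline, with the script's intermediate device checks omitted (-i 8 requests 8 I/O queues):

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a \
            --hostid=f0f74192-2f63-41a2-a029-58386886737a
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits "disconnected 1 controller(s)"
    done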
00:15:45.446 11:43:18 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:45.446 11:43:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:45.446 11:43:18 -- nvmf/common.sh@116 -- # sync 00:15:45.446 11:43:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:45.446 11:43:18 -- nvmf/common.sh@119 -- # set +e 00:15:45.446 11:43:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:45.446 11:43:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:45.446 rmmod nvme_tcp 00:15:45.446 rmmod nvme_fabrics 00:15:45.446 rmmod nvme_keyring 00:15:45.446 11:43:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:45.446 11:43:18 -- nvmf/common.sh@123 -- # set -e 00:15:45.446 11:43:18 -- nvmf/common.sh@124 -- # return 0 00:15:45.446 11:43:18 -- nvmf/common.sh@477 -- # '[' -n 62234 ']' 00:15:45.446 11:43:18 -- nvmf/common.sh@478 -- # killprocess 62234 00:15:45.446 11:43:18 -- common/autotest_common.sh@936 -- # '[' -z 62234 ']' 00:15:45.446 11:43:18 -- common/autotest_common.sh@940 -- # kill -0 62234 00:15:45.446 11:43:18 -- common/autotest_common.sh@941 -- # uname 00:15:45.446 11:43:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:45.446 11:43:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62234 00:15:45.704 killing process with pid 62234 00:15:45.704 11:43:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:45.704 11:43:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:45.704 11:43:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62234' 00:15:45.704 11:43:18 -- common/autotest_common.sh@955 -- # kill 62234 00:15:45.704 11:43:18 -- common/autotest_common.sh@960 -- # wait 62234 00:15:45.704 11:43:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:45.704 11:43:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:45.704 11:43:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:45.704 11:43:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.705 11:43:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:45.705 11:43:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.705 11:43:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.705 11:43:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.963 11:43:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:45.963 00:15:45.963 real 3m46.775s 00:15:45.963 user 14m49.042s 00:15:45.963 sys 0m20.456s 00:15:45.963 11:43:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:45.963 ************************************ 00:15:45.963 END TEST nvmf_connect_disconnect 00:15:45.963 ************************************ 00:15:45.963 11:43:18 -- common/autotest_common.sh@10 -- # set +x 00:15:45.963 11:43:18 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:45.963 11:43:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:45.963 11:43:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:45.963 11:43:18 -- common/autotest_common.sh@10 -- # set +x 00:15:45.963 ************************************ 00:15:45.963 START TEST nvmf_multitarget 00:15:45.963 ************************************ 00:15:45.963 11:43:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:45.963 * Looking for test storage... 
00:15:45.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.963 11:43:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:45.963 11:43:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:45.963 11:43:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:46.222 11:43:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:46.222 11:43:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:46.223 11:43:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:46.223 11:43:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:46.223 11:43:19 -- scripts/common.sh@335 -- # IFS=.-: 00:15:46.223 11:43:19 -- scripts/common.sh@335 -- # read -ra ver1 00:15:46.223 11:43:19 -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.223 11:43:19 -- scripts/common.sh@336 -- # read -ra ver2 00:15:46.223 11:43:19 -- scripts/common.sh@337 -- # local 'op=<' 00:15:46.223 11:43:19 -- scripts/common.sh@339 -- # ver1_l=2 00:15:46.223 11:43:19 -- scripts/common.sh@340 -- # ver2_l=1 00:15:46.223 11:43:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:46.223 11:43:19 -- scripts/common.sh@343 -- # case "$op" in 00:15:46.223 11:43:19 -- scripts/common.sh@344 -- # : 1 00:15:46.223 11:43:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:46.223 11:43:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.223 11:43:19 -- scripts/common.sh@364 -- # decimal 1 00:15:46.223 11:43:19 -- scripts/common.sh@352 -- # local d=1 00:15:46.223 11:43:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.223 11:43:19 -- scripts/common.sh@354 -- # echo 1 00:15:46.223 11:43:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:46.223 11:43:19 -- scripts/common.sh@365 -- # decimal 2 00:15:46.223 11:43:19 -- scripts/common.sh@352 -- # local d=2 00:15:46.223 11:43:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.223 11:43:19 -- scripts/common.sh@354 -- # echo 2 00:15:46.223 11:43:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:46.223 11:43:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:46.223 11:43:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:46.223 11:43:19 -- scripts/common.sh@367 -- # return 0 00:15:46.223 11:43:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.223 11:43:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.223 --rc genhtml_branch_coverage=1 00:15:46.223 --rc genhtml_function_coverage=1 00:15:46.223 --rc genhtml_legend=1 00:15:46.223 --rc geninfo_all_blocks=1 00:15:46.223 --rc geninfo_unexecuted_blocks=1 00:15:46.223 00:15:46.223 ' 00:15:46.223 11:43:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.223 --rc genhtml_branch_coverage=1 00:15:46.223 --rc genhtml_function_coverage=1 00:15:46.223 --rc genhtml_legend=1 00:15:46.223 --rc geninfo_all_blocks=1 00:15:46.223 --rc geninfo_unexecuted_blocks=1 00:15:46.223 00:15:46.223 ' 00:15:46.223 11:43:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.223 --rc genhtml_branch_coverage=1 00:15:46.223 --rc genhtml_function_coverage=1 00:15:46.223 --rc genhtml_legend=1 00:15:46.223 --rc geninfo_all_blocks=1 00:15:46.223 --rc geninfo_unexecuted_blocks=1 00:15:46.223 00:15:46.223 ' 00:15:46.223 
11:43:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.223 --rc genhtml_branch_coverage=1 00:15:46.223 --rc genhtml_function_coverage=1 00:15:46.223 --rc genhtml_legend=1 00:15:46.223 --rc geninfo_all_blocks=1 00:15:46.223 --rc geninfo_unexecuted_blocks=1 00:15:46.223 00:15:46.223 ' 00:15:46.223 11:43:19 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.223 11:43:19 -- nvmf/common.sh@7 -- # uname -s 00:15:46.223 11:43:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.223 11:43:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.223 11:43:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.223 11:43:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.223 11:43:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.223 11:43:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.223 11:43:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.223 11:43:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.223 11:43:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.223 11:43:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.223 11:43:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:15:46.223 11:43:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:15:46.223 11:43:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.223 11:43:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.223 11:43:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.223 11:43:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.223 11:43:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.223 11:43:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.223 11:43:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.223 11:43:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.223 11:43:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.223 11:43:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.223 11:43:19 -- paths/export.sh@5 -- # export PATH 00:15:46.223 11:43:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.223 11:43:19 -- nvmf/common.sh@46 -- # : 0 00:15:46.223 11:43:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:46.223 11:43:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:46.223 11:43:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:46.223 11:43:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.223 11:43:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.223 11:43:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:46.223 11:43:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:46.223 11:43:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:46.223 11:43:19 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:15:46.223 11:43:19 -- target/multitarget.sh@15 -- # nvmftestinit 00:15:46.223 11:43:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:46.223 11:43:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.223 11:43:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:46.223 11:43:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:46.223 11:43:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:46.223 11:43:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.223 11:43:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.223 11:43:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.223 11:43:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:46.223 11:43:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:46.223 11:43:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:46.223 11:43:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:46.223 11:43:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:46.223 11:43:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:46.223 11:43:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.223 11:43:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.223 11:43:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:46.223 11:43:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:46.223 11:43:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.223 11:43:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.223 11:43:19 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.223 11:43:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.223 11:43:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.223 11:43:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.223 11:43:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.223 11:43:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.223 11:43:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:46.224 11:43:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:46.224 Cannot find device "nvmf_tgt_br" 00:15:46.224 11:43:19 -- nvmf/common.sh@154 -- # true 00:15:46.224 11:43:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.224 Cannot find device "nvmf_tgt_br2" 00:15:46.224 11:43:19 -- nvmf/common.sh@155 -- # true 00:15:46.224 11:43:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:46.224 11:43:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:46.224 Cannot find device "nvmf_tgt_br" 00:15:46.224 11:43:19 -- nvmf/common.sh@157 -- # true 00:15:46.224 11:43:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:46.224 Cannot find device "nvmf_tgt_br2" 00:15:46.224 11:43:19 -- nvmf/common.sh@158 -- # true 00:15:46.224 11:43:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:46.224 11:43:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:46.224 11:43:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.224 11:43:19 -- nvmf/common.sh@161 -- # true 00:15:46.224 11:43:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.482 11:43:19 -- nvmf/common.sh@162 -- # true 00:15:46.482 11:43:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.482 11:43:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.482 11:43:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.482 11:43:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.482 11:43:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.482 11:43:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.482 11:43:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.482 11:43:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.482 11:43:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:46.482 11:43:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:46.482 11:43:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:46.482 11:43:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:46.482 11:43:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:46.482 11:43:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.482 11:43:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.482 11:43:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:46.482 11:43:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:46.482 11:43:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:46.482 11:43:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.482 11:43:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.482 11:43:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.482 11:43:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.483 11:43:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.483 11:43:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:46.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:46.483 00:15:46.483 --- 10.0.0.2 ping statistics --- 00:15:46.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.483 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:46.483 11:43:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:46.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:46.483 00:15:46.483 --- 10.0.0.3 ping statistics --- 00:15:46.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.483 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:46.483 11:43:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:46.483 00:15:46.483 --- 10.0.0.1 ping statistics --- 00:15:46.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.483 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:46.483 11:43:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.483 11:43:19 -- nvmf/common.sh@421 -- # return 0 00:15:46.483 11:43:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:46.483 11:43:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.483 11:43:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:46.483 11:43:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:46.483 11:43:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.483 11:43:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:46.483 11:43:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:46.483 11:43:19 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:46.483 11:43:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:46.483 11:43:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.483 11:43:19 -- common/autotest_common.sh@10 -- # set +x 00:15:46.483 11:43:19 -- nvmf/common.sh@469 -- # nvmfpid=66016 00:15:46.483 11:43:19 -- nvmf/common.sh@470 -- # waitforlisten 66016 00:15:46.483 11:43:19 -- common/autotest_common.sh@829 -- # '[' -z 66016 ']' 00:15:46.483 11:43:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.483 11:43:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.483 11:43:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.483 11:43:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:46.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.483 11:43:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.483 11:43:19 -- common/autotest_common.sh@10 -- # set +x 00:15:46.483 [2024-11-20 11:43:19.520350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:46.483 [2024-11-20 11:43:19.520421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.741 [2024-11-20 11:43:19.658162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.741 [2024-11-20 11:43:19.755143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:46.741 [2024-11-20 11:43:19.755281] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.741 [2024-11-20 11:43:19.755289] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.741 [2024-11-20 11:43:19.755295] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.741 [2024-11-20 11:43:19.755408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.741 [2024-11-20 11:43:19.755593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.741 [2024-11-20 11:43:19.755798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.741 [2024-11-20 11:43:19.755799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.676 11:43:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.676 11:43:20 -- common/autotest_common.sh@862 -- # return 0 00:15:47.676 11:43:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.676 11:43:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.676 11:43:20 -- common/autotest_common.sh@10 -- # set +x 00:15:47.676 11:43:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.677 11:43:20 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:47.677 11:43:20 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:47.677 11:43:20 -- target/multitarget.sh@21 -- # jq length 00:15:47.677 11:43:20 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:47.677 11:43:20 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:47.677 "nvmf_tgt_1" 00:15:47.677 11:43:20 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:47.935 "nvmf_tgt_2" 00:15:47.935 11:43:20 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:47.935 11:43:20 -- target/multitarget.sh@28 -- # jq length 00:15:47.935 11:43:20 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:47.935 11:43:20 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:47.935 true 00:15:47.935 11:43:20 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_2 00:15:48.194 true 00:15:48.194 11:43:21 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:48.194 11:43:21 -- target/multitarget.sh@35 -- # jq length 00:15:48.194 11:43:21 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:48.194 11:43:21 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:48.194 11:43:21 -- target/multitarget.sh@41 -- # nvmftestfini 00:15:48.194 11:43:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:48.194 11:43:21 -- nvmf/common.sh@116 -- # sync 00:15:48.194 11:43:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:48.195 11:43:21 -- nvmf/common.sh@119 -- # set +e 00:15:48.195 11:43:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:48.195 11:43:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:48.195 rmmod nvme_tcp 00:15:48.195 rmmod nvme_fabrics 00:15:48.455 rmmod nvme_keyring 00:15:48.455 11:43:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:48.455 11:43:21 -- nvmf/common.sh@123 -- # set -e 00:15:48.455 11:43:21 -- nvmf/common.sh@124 -- # return 0 00:15:48.455 11:43:21 -- nvmf/common.sh@477 -- # '[' -n 66016 ']' 00:15:48.455 11:43:21 -- nvmf/common.sh@478 -- # killprocess 66016 00:15:48.455 11:43:21 -- common/autotest_common.sh@936 -- # '[' -z 66016 ']' 00:15:48.455 11:43:21 -- common/autotest_common.sh@940 -- # kill -0 66016 00:15:48.455 11:43:21 -- common/autotest_common.sh@941 -- # uname 00:15:48.455 11:43:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.455 11:43:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66016 00:15:48.455 11:43:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.455 11:43:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.455 11:43:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66016' 00:15:48.455 killing process with pid 66016 00:15:48.455 11:43:21 -- common/autotest_common.sh@955 -- # kill 66016 00:15:48.455 11:43:21 -- common/autotest_common.sh@960 -- # wait 66016 00:15:48.714 11:43:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:48.714 11:43:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:48.715 11:43:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:48.715 11:43:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.715 11:43:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:48.715 11:43:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.715 11:43:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.715 11:43:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.715 11:43:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:48.715 ************************************ 00:15:48.715 END TEST nvmf_multitarget 00:15:48.715 ************************************ 00:15:48.715 00:15:48.715 real 0m2.735s 00:15:48.715 user 0m8.071s 00:15:48.715 sys 0m0.767s 00:15:48.715 11:43:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:48.715 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:15:48.715 11:43:21 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:48.715 11:43:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.715 11:43:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.715 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:15:48.715 
************************************ 00:15:48.715 START TEST nvmf_rpc 00:15:48.715 ************************************ 00:15:48.715 11:43:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:48.715 * Looking for test storage... 00:15:48.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.975 11:43:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:48.975 11:43:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:48.975 11:43:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:48.975 11:43:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:48.975 11:43:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:48.975 11:43:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:48.975 11:43:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:48.975 11:43:21 -- scripts/common.sh@335 -- # IFS=.-: 00:15:48.975 11:43:21 -- scripts/common.sh@335 -- # read -ra ver1 00:15:48.975 11:43:21 -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.975 11:43:21 -- scripts/common.sh@336 -- # read -ra ver2 00:15:48.975 11:43:21 -- scripts/common.sh@337 -- # local 'op=<' 00:15:48.975 11:43:21 -- scripts/common.sh@339 -- # ver1_l=2 00:15:48.975 11:43:21 -- scripts/common.sh@340 -- # ver2_l=1 00:15:48.975 11:43:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:48.975 11:43:21 -- scripts/common.sh@343 -- # case "$op" in 00:15:48.975 11:43:21 -- scripts/common.sh@344 -- # : 1 00:15:48.975 11:43:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:48.976 11:43:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.976 11:43:21 -- scripts/common.sh@364 -- # decimal 1 00:15:48.976 11:43:21 -- scripts/common.sh@352 -- # local d=1 00:15:48.976 11:43:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.976 11:43:21 -- scripts/common.sh@354 -- # echo 1 00:15:48.976 11:43:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:48.976 11:43:21 -- scripts/common.sh@365 -- # decimal 2 00:15:48.976 11:43:21 -- scripts/common.sh@352 -- # local d=2 00:15:48.976 11:43:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.976 11:43:21 -- scripts/common.sh@354 -- # echo 2 00:15:48.976 11:43:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:48.976 11:43:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:48.976 11:43:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:48.976 11:43:21 -- scripts/common.sh@367 -- # return 0 00:15:48.976 11:43:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.976 11:43:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:48.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.976 --rc genhtml_branch_coverage=1 00:15:48.976 --rc genhtml_function_coverage=1 00:15:48.976 --rc genhtml_legend=1 00:15:48.976 --rc geninfo_all_blocks=1 00:15:48.976 --rc geninfo_unexecuted_blocks=1 00:15:48.976 00:15:48.976 ' 00:15:48.976 11:43:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:48.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.976 --rc genhtml_branch_coverage=1 00:15:48.976 --rc genhtml_function_coverage=1 00:15:48.976 --rc genhtml_legend=1 00:15:48.976 --rc geninfo_all_blocks=1 00:15:48.976 --rc geninfo_unexecuted_blocks=1 00:15:48.976 00:15:48.976 ' 00:15:48.976 11:43:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:48.976 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.976 --rc genhtml_branch_coverage=1 00:15:48.976 --rc genhtml_function_coverage=1 00:15:48.976 --rc genhtml_legend=1 00:15:48.976 --rc geninfo_all_blocks=1 00:15:48.976 --rc geninfo_unexecuted_blocks=1 00:15:48.976 00:15:48.976 ' 00:15:48.976 11:43:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:48.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.976 --rc genhtml_branch_coverage=1 00:15:48.976 --rc genhtml_function_coverage=1 00:15:48.976 --rc genhtml_legend=1 00:15:48.976 --rc geninfo_all_blocks=1 00:15:48.976 --rc geninfo_unexecuted_blocks=1 00:15:48.976 00:15:48.976 ' 00:15:48.976 11:43:21 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.976 11:43:21 -- nvmf/common.sh@7 -- # uname -s 00:15:48.976 11:43:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.976 11:43:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.976 11:43:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.976 11:43:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.976 11:43:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.976 11:43:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.976 11:43:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.976 11:43:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.976 11:43:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.976 11:43:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.976 11:43:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:15:48.976 11:43:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:15:48.976 11:43:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.976 11:43:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.976 11:43:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.976 11:43:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.976 11:43:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.976 11:43:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.976 11:43:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.976 11:43:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.976 11:43:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.976 11:43:21 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.976 11:43:21 -- paths/export.sh@5 -- # export PATH 00:15:48.976 11:43:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.976 11:43:21 -- nvmf/common.sh@46 -- # : 0 00:15:48.976 11:43:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:48.976 11:43:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:48.976 11:43:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:48.976 11:43:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.976 11:43:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.976 11:43:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:48.976 11:43:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:48.976 11:43:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:48.976 11:43:21 -- target/rpc.sh@11 -- # loops=5 00:15:48.976 11:43:21 -- target/rpc.sh@23 -- # nvmftestinit 00:15:48.976 11:43:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:48.976 11:43:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.976 11:43:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:48.976 11:43:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:48.976 11:43:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:48.976 11:43:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.976 11:43:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.976 11:43:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.976 11:43:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:48.976 11:43:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:48.976 11:43:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:48.976 11:43:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:48.976 11:43:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:48.977 11:43:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:48.977 11:43:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.977 11:43:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.977 11:43:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.977 11:43:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:48.977 11:43:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.977 11:43:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.977 11:43:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.977 11:43:21 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.977 11:43:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.977 11:43:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.977 11:43:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.977 11:43:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.977 11:43:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:48.977 11:43:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:48.977 Cannot find device "nvmf_tgt_br" 00:15:48.977 11:43:21 -- nvmf/common.sh@154 -- # true 00:15:48.977 11:43:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.977 Cannot find device "nvmf_tgt_br2" 00:15:48.977 11:43:21 -- nvmf/common.sh@155 -- # true 00:15:48.977 11:43:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:48.977 11:43:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:48.977 Cannot find device "nvmf_tgt_br" 00:15:48.977 11:43:21 -- nvmf/common.sh@157 -- # true 00:15:48.977 11:43:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:48.977 Cannot find device "nvmf_tgt_br2" 00:15:48.977 11:43:21 -- nvmf/common.sh@158 -- # true 00:15:48.977 11:43:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:49.237 11:43:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:49.237 11:43:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.237 11:43:22 -- nvmf/common.sh@161 -- # true 00:15:49.237 11:43:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.237 11:43:22 -- nvmf/common.sh@162 -- # true 00:15:49.237 11:43:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.237 11:43:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.237 11:43:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.237 11:43:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.237 11:43:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.237 11:43:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.237 11:43:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.237 11:43:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:49.237 11:43:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:49.237 11:43:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:49.237 11:43:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:49.237 11:43:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:49.237 11:43:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:49.237 11:43:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.237 11:43:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.237 11:43:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.237 11:43:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:15:49.237 11:43:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:49.237 11:43:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.237 11:43:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.237 11:43:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.237 11:43:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.237 11:43:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.237 11:43:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:49.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:49.237 00:15:49.237 --- 10.0.0.2 ping statistics --- 00:15:49.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.237 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:49.237 11:43:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:49.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:49.237 00:15:49.237 --- 10.0.0.3 ping statistics --- 00:15:49.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.237 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:49.237 11:43:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:15:49.237 00:15:49.237 --- 10.0.0.1 ping statistics --- 00:15:49.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.237 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:15:49.237 11:43:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.237 11:43:22 -- nvmf/common.sh@421 -- # return 0 00:15:49.237 11:43:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:49.237 11:43:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.237 11:43:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:49.237 11:43:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:49.237 11:43:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.237 11:43:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:49.237 11:43:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:49.237 11:43:22 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:49.237 11:43:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:49.237 11:43:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.237 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.237 11:43:22 -- nvmf/common.sh@469 -- # nvmfpid=66256 00:15:49.238 11:43:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:49.238 11:43:22 -- nvmf/common.sh@470 -- # waitforlisten 66256 00:15:49.238 11:43:22 -- common/autotest_common.sh@829 -- # '[' -z 66256 ']' 00:15:49.238 11:43:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.238 11:43:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.238 11:43:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:49.238 11:43:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.238 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:15:49.497 [2024-11-20 11:43:22.302902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:49.497 [2024-11-20 11:43:22.302968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.497 [2024-11-20 11:43:22.440522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.497 [2024-11-20 11:43:22.537678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:49.497 [2024-11-20 11:43:22.537820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.497 [2024-11-20 11:43:22.537827] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.497 [2024-11-20 11:43:22.537832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.497 [2024-11-20 11:43:22.537957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.497 [2024-11-20 11:43:22.538858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.758 [2024-11-20 11:43:22.538954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.758 [2024-11-20 11:43:22.538950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.328 11:43:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.328 11:43:23 -- common/autotest_common.sh@862 -- # return 0 00:15:50.328 11:43:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:50.328 11:43:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.328 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.328 11:43:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.328 11:43:23 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:50.328 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.328 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.328 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.328 11:43:23 -- target/rpc.sh@26 -- # stats='{ 00:15:50.328 "poll_groups": [ 00:15:50.328 { 00:15:50.328 "admin_qpairs": 0, 00:15:50.328 "completed_nvme_io": 0, 00:15:50.328 "current_admin_qpairs": 0, 00:15:50.328 "current_io_qpairs": 0, 00:15:50.328 "io_qpairs": 0, 00:15:50.328 "name": "nvmf_tgt_poll_group_0", 00:15:50.328 "pending_bdev_io": 0, 00:15:50.328 "transports": [] 00:15:50.328 }, 00:15:50.328 { 00:15:50.328 "admin_qpairs": 0, 00:15:50.328 "completed_nvme_io": 0, 00:15:50.328 "current_admin_qpairs": 0, 00:15:50.328 "current_io_qpairs": 0, 00:15:50.328 "io_qpairs": 0, 00:15:50.328 "name": "nvmf_tgt_poll_group_1", 00:15:50.328 "pending_bdev_io": 0, 00:15:50.328 "transports": [] 00:15:50.328 }, 00:15:50.328 { 00:15:50.328 "admin_qpairs": 0, 00:15:50.328 "completed_nvme_io": 0, 00:15:50.328 "current_admin_qpairs": 0, 00:15:50.328 "current_io_qpairs": 0, 00:15:50.328 "io_qpairs": 0, 00:15:50.328 "name": "nvmf_tgt_poll_group_2", 00:15:50.328 "pending_bdev_io": 0, 00:15:50.328 "transports": [] 00:15:50.328 }, 00:15:50.328 { 00:15:50.328 "admin_qpairs": 0, 00:15:50.328 "completed_nvme_io": 0, 00:15:50.328 "current_admin_qpairs": 0, 
00:15:50.328 "current_io_qpairs": 0, 00:15:50.328 "io_qpairs": 0, 00:15:50.328 "name": "nvmf_tgt_poll_group_3", 00:15:50.328 "pending_bdev_io": 0, 00:15:50.328 "transports": [] 00:15:50.328 } 00:15:50.328 ], 00:15:50.328 "tick_rate": 2290000000 00:15:50.328 }' 00:15:50.328 11:43:23 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:50.328 11:43:23 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:50.328 11:43:23 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:50.328 11:43:23 -- target/rpc.sh@15 -- # wc -l 00:15:50.328 11:43:23 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:50.328 11:43:23 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:50.328 11:43:23 -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:50.328 11:43:23 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.328 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.328 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.588 [2024-11-20 11:43:23.372946] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.588 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.588 11:43:23 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:50.588 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.588 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.588 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.588 11:43:23 -- target/rpc.sh@33 -- # stats='{ 00:15:50.588 "poll_groups": [ 00:15:50.588 { 00:15:50.588 "admin_qpairs": 0, 00:15:50.588 "completed_nvme_io": 0, 00:15:50.588 "current_admin_qpairs": 0, 00:15:50.588 "current_io_qpairs": 0, 00:15:50.588 "io_qpairs": 0, 00:15:50.588 "name": "nvmf_tgt_poll_group_0", 00:15:50.588 "pending_bdev_io": 0, 00:15:50.588 "transports": [ 00:15:50.588 { 00:15:50.588 "trtype": "TCP" 00:15:50.588 } 00:15:50.588 ] 00:15:50.588 }, 00:15:50.588 { 00:15:50.588 "admin_qpairs": 0, 00:15:50.588 "completed_nvme_io": 0, 00:15:50.588 "current_admin_qpairs": 0, 00:15:50.588 "current_io_qpairs": 0, 00:15:50.588 "io_qpairs": 0, 00:15:50.588 "name": "nvmf_tgt_poll_group_1", 00:15:50.588 "pending_bdev_io": 0, 00:15:50.588 "transports": [ 00:15:50.588 { 00:15:50.588 "trtype": "TCP" 00:15:50.588 } 00:15:50.588 ] 00:15:50.588 }, 00:15:50.588 { 00:15:50.588 "admin_qpairs": 0, 00:15:50.588 "completed_nvme_io": 0, 00:15:50.588 "current_admin_qpairs": 0, 00:15:50.588 "current_io_qpairs": 0, 00:15:50.588 "io_qpairs": 0, 00:15:50.588 "name": "nvmf_tgt_poll_group_2", 00:15:50.588 "pending_bdev_io": 0, 00:15:50.588 "transports": [ 00:15:50.588 { 00:15:50.588 "trtype": "TCP" 00:15:50.588 } 00:15:50.588 ] 00:15:50.588 }, 00:15:50.588 { 00:15:50.588 "admin_qpairs": 0, 00:15:50.588 "completed_nvme_io": 0, 00:15:50.588 "current_admin_qpairs": 0, 00:15:50.588 "current_io_qpairs": 0, 00:15:50.588 "io_qpairs": 0, 00:15:50.588 "name": "nvmf_tgt_poll_group_3", 00:15:50.588 "pending_bdev_io": 0, 00:15:50.588 "transports": [ 00:15:50.588 { 00:15:50.588 "trtype": "TCP" 00:15:50.588 } 00:15:50.588 ] 00:15:50.588 } 00:15:50.588 ], 00:15:50.588 "tick_rate": 2290000000 00:15:50.588 }' 00:15:50.588 11:43:23 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:50.588 11:43:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:50.588 11:43:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:50.588 11:43:23 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:50.588 11:43:23 -- target/rpc.sh@35 -- # (( 0 == 0 )) 
00:15:50.588 11:43:23 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:50.588 11:43:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:50.588 11:43:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:50.588 11:43:23 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:50.588 11:43:23 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:50.588 11:43:23 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:50.588 11:43:23 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:50.588 11:43:23 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:50.588 11:43:23 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:50.588 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.588 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.588 Malloc1 00:15:50.588 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.588 11:43:23 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:50.588 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.588 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.588 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.588 11:43:23 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:50.588 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.588 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.588 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.588 11:43:23 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:50.588 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.588 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.588 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.588 11:43:23 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.588 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.588 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.588 [2024-11-20 11:43:23.587125] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.588 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.588 11:43:23 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a -a 10.0.0.2 -s 4420 00:15:50.588 11:43:23 -- common/autotest_common.sh@650 -- # local es=0 00:15:50.588 11:43:23 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a -a 10.0.0.2 -s 4420 00:15:50.588 11:43:23 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:50.588 11:43:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.588 11:43:23 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:50.589 11:43:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.589 11:43:23 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:50.589 11:43:23 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.589 11:43:23 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:50.589 11:43:23 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:50.589 11:43:23 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a -a 10.0.0.2 -s 4420 00:15:50.589 [2024-11-20 11:43:23.623427] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a' 00:15:50.849 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:50.849 could not add new controller: failed to write to nvme-fabrics device 00:15:50.849 11:43:23 -- common/autotest_common.sh@653 -- # es=1 00:15:50.849 11:43:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.849 11:43:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.849 11:43:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.849 11:43:23 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:15:50.849 11:43:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.849 11:43:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.849 11:43:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.849 11:43:23 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:50.849 11:43:23 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:50.849 11:43:23 -- common/autotest_common.sh@1187 -- # local i=0 00:15:50.849 11:43:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:50.849 11:43:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:50.849 11:43:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:53.385 11:43:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:53.385 11:43:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:53.385 11:43:25 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.385 11:43:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:53.385 11:43:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.385 11:43:25 -- common/autotest_common.sh@1197 -- # return 0 00:15:53.385 11:43:25 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:53.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.385 11:43:26 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:53.385 11:43:26 -- common/autotest_common.sh@1208 -- # local i=0 00:15:53.385 11:43:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:53.385 11:43:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.385 11:43:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:53.385 11:43:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.385 11:43:26 -- common/autotest_common.sh@1220 -- # return 0 00:15:53.385 11:43:26 -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:15:53.385 11:43:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.385 11:43:26 -- common/autotest_common.sh@10 -- # set +x 00:15:53.385 11:43:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.385 11:43:26 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.385 11:43:26 -- common/autotest_common.sh@650 -- # local es=0 00:15:53.385 11:43:26 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.385 11:43:26 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:53.385 11:43:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.385 11:43:26 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:53.385 11:43:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.385 11:43:26 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:53.385 11:43:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.385 11:43:26 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:53.385 11:43:26 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:53.385 11:43:26 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.385 [2024-11-20 11:43:26.091681] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a' 00:15:53.385 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:53.385 could not add new controller: failed to write to nvme-fabrics device 00:15:53.385 11:43:26 -- common/autotest_common.sh@653 -- # es=1 00:15:53.385 11:43:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:53.385 11:43:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:53.385 11:43:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:53.385 11:43:26 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:53.385 11:43:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.385 11:43:26 -- common/autotest_common.sh@10 -- # set +x 00:15:53.385 11:43:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.385 11:43:26 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.385 11:43:26 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.385 11:43:26 -- common/autotest_common.sh@1187 -- # local i=0 00:15:53.385 11:43:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.385 11:43:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:53.385 11:43:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:55.290 11:43:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:55.290 
11:43:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:55.290 11:43:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.290 11:43:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:55.290 11:43:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.290 11:43:28 -- common/autotest_common.sh@1197 -- # return 0 00:15:55.290 11:43:28 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.553 11:43:28 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:55.553 11:43:28 -- common/autotest_common.sh@1208 -- # local i=0 00:15:55.553 11:43:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:55.553 11:43:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.553 11:43:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:55.553 11:43:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.553 11:43:28 -- common/autotest_common.sh@1220 -- # return 0 00:15:55.553 11:43:28 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.553 11:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.553 11:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:55.553 11:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.553 11:43:28 -- target/rpc.sh@81 -- # seq 1 5 00:15:55.553 11:43:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:55.553 11:43:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:55.553 11:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.553 11:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:55.553 11:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.553 11:43:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.553 11:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.553 11:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:55.553 [2024-11-20 11:43:28.531931] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.553 11:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.553 11:43:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:55.553 11:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.553 11:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:55.553 11:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.553 11:43:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:55.553 11:43:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.553 11:43:28 -- common/autotest_common.sh@10 -- # set +x 00:15:55.553 11:43:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.553 11:43:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.813 11:43:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:55.813 11:43:28 -- common/autotest_common.sh@1187 -- # local i=0 00:15:55.813 11:43:28 -- common/autotest_common.sh@1188 -- # 
local nvme_device_counter=1 nvme_devices=0 00:15:55.813 11:43:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:55.813 11:43:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:57.730 11:43:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:57.730 11:43:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:57.730 11:43:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.730 11:43:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:57.730 11:43:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.730 11:43:30 -- common/autotest_common.sh@1197 -- # return 0 00:15:57.730 11:43:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.990 11:43:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.990 11:43:30 -- common/autotest_common.sh@1208 -- # local i=0 00:15:57.990 11:43:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:57.991 11:43:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.991 11:43:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:57.991 11:43:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.991 11:43:30 -- common/autotest_common.sh@1220 -- # return 0 00:15:57.991 11:43:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.991 11:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.991 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.991 11:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.991 11:43:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.991 11:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.991 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.991 11:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.991 11:43:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:57.991 11:43:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.991 11:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.991 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.991 11:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.991 11:43:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.991 11:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.991 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.991 [2024-11-20 11:43:30.886337] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.991 11:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.991 11:43:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:57.991 11:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.991 11:43:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.991 11:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.991 11:43:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.991 11:43:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.991 11:43:30 -- common/autotest_common.sh@10 
-- # set +x 00:15:57.991 11:43:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.991 11:43:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:58.250 11:43:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:58.250 11:43:31 -- common/autotest_common.sh@1187 -- # local i=0 00:15:58.250 11:43:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.250 11:43:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:58.250 11:43:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:00.157 11:43:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:00.157 11:43:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:00.157 11:43:33 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:00.157 11:43:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:00.157 11:43:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.157 11:43:33 -- common/autotest_common.sh@1197 -- # return 0 00:16:00.157 11:43:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.157 11:43:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:00.157 11:43:33 -- common/autotest_common.sh@1208 -- # local i=0 00:16:00.157 11:43:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:00.157 11:43:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.157 11:43:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:00.157 11:43:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.416 11:43:33 -- common/autotest_common.sh@1220 -- # return 0 00:16:00.416 11:43:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:00.416 11:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.416 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.416 11:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.416 11:43:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:00.416 11:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.416 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.416 11:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.416 11:43:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:00.416 11:43:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:00.416 11:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.416 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.416 11:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.416 11:43:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:00.416 11:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.416 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.416 [2024-11-20 11:43:33.241187] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.416 11:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.416 11:43:33 -- 
target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:00.416 11:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.416 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.416 11:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.417 11:43:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:00.417 11:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.417 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.417 11:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.417 11:43:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:00.417 11:43:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:00.417 11:43:33 -- common/autotest_common.sh@1187 -- # local i=0 00:16:00.417 11:43:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.417 11:43:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:00.417 11:43:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:02.955 11:43:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:02.955 11:43:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:02.955 11:43:35 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.955 11:43:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:02.955 11:43:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.955 11:43:35 -- common/autotest_common.sh@1197 -- # return 0 00:16:02.955 11:43:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.955 11:43:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.955 11:43:35 -- common/autotest_common.sh@1208 -- # local i=0 00:16:02.955 11:43:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:02.955 11:43:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.955 11:43:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:02.955 11:43:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.955 11:43:35 -- common/autotest_common.sh@1220 -- # return 0 00:16:02.955 11:43:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:02.955 11:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.955 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.955 11:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.955 11:43:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.955 11:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.955 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.955 11:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.955 11:43:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:02.955 11:43:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:02.955 11:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.955 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.955 11:43:35 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.955 11:43:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.955 11:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.955 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.955 [2024-11-20 11:43:35.587983] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.955 11:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.955 11:43:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:02.955 11:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.955 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.955 11:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.955 11:43:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:02.955 11:43:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.955 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.955 11:43:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.956 11:43:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.956 11:43:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:02.956 11:43:35 -- common/autotest_common.sh@1187 -- # local i=0 00:16:02.956 11:43:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.956 11:43:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:02.956 11:43:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:04.861 11:43:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:04.861 11:43:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:04.861 11:43:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.861 11:43:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:04.861 11:43:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.861 11:43:37 -- common/autotest_common.sh@1197 -- # return 0 00:16:04.861 11:43:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.861 11:43:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:04.861 11:43:37 -- common/autotest_common.sh@1208 -- # local i=0 00:16:04.861 11:43:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:04.861 11:43:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.861 11:43:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:04.861 11:43:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.861 11:43:37 -- common/autotest_common.sh@1220 -- # return 0 00:16:04.861 11:43:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:04.861 11:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.861 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:04.861 11:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.861 11:43:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.861 11:43:37 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.861 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 11:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.121 11:43:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:05.121 11:43:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.121 11:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.121 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 11:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.121 11:43:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.121 11:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.121 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 [2024-11-20 11:43:37.923428] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.121 11:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.121 11:43:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:05.121 11:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.121 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 11:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.121 11:43:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.121 11:43:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.121 11:43:37 -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 11:43:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.121 11:43:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.121 11:43:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:05.121 11:43:38 -- common/autotest_common.sh@1187 -- # local i=0 00:16:05.121 11:43:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.121 11:43:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:05.121 11:43:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:07.664 11:43:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:07.664 11:43:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:07.664 11:43:40 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:07.664 11:43:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:07.664 11:43:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.664 11:43:40 -- common/autotest_common.sh@1197 -- # return 0 00:16:07.664 11:43:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.664 11:43:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:07.664 11:43:40 -- common/autotest_common.sh@1208 -- # local i=0 00:16:07.664 11:43:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:07.664 11:43:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.664 11:43:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:07.664 11:43:40 -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:16:07.664 11:43:40 -- common/autotest_common.sh@1220 -- # return 0 00:16:07.664 11:43:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@99 -- # seq 1 5 00:16:07.664 11:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:07.664 11:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 [2024-11-20 11:43:40.278612] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:07.664 11:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.664 
11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 [2024-11-20 11:43:40.346530] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.664 11:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.664 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.664 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.664 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:07.665 11:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 [2024-11-20 11:43:40.422446] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:07.665 11:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 [2024-11-20 11:43:40.494413] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:07.665 11:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 [2024-11-20 11:43:40.566378] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.665 11:43:40 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:07.665 11:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.665 11:43:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 11:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.665 11:43:40 -- target/rpc.sh@110 -- # stats='{ 00:16:07.665 "poll_groups": [ 00:16:07.665 { 00:16:07.665 "admin_qpairs": 2, 00:16:07.665 "completed_nvme_io": 66, 00:16:07.665 "current_admin_qpairs": 0, 00:16:07.665 "current_io_qpairs": 0, 00:16:07.665 "io_qpairs": 16, 00:16:07.665 "name": "nvmf_tgt_poll_group_0", 00:16:07.665 "pending_bdev_io": 0, 00:16:07.665 "transports": [ 00:16:07.665 { 00:16:07.665 "trtype": "TCP" 00:16:07.665 } 00:16:07.665 ] 00:16:07.665 }, 00:16:07.665 { 00:16:07.665 "admin_qpairs": 3, 00:16:07.665 "completed_nvme_io": 68, 00:16:07.665 "current_admin_qpairs": 0, 00:16:07.665 "current_io_qpairs": 0, 00:16:07.665 "io_qpairs": 17, 00:16:07.665 "name": "nvmf_tgt_poll_group_1", 00:16:07.665 "pending_bdev_io": 0, 00:16:07.665 "transports": [ 00:16:07.665 { 00:16:07.665 "trtype": "TCP" 00:16:07.665 } 00:16:07.665 ] 00:16:07.665 }, 00:16:07.665 { 00:16:07.665 "admin_qpairs": 1, 00:16:07.665 "completed_nvme_io": 120, 00:16:07.665 "current_admin_qpairs": 0, 00:16:07.665 "current_io_qpairs": 0, 00:16:07.665 "io_qpairs": 19, 00:16:07.665 "name": "nvmf_tgt_poll_group_2", 00:16:07.665 "pending_bdev_io": 0, 00:16:07.665 "transports": [ 00:16:07.665 { 00:16:07.665 "trtype": "TCP" 00:16:07.665 } 00:16:07.665 ] 00:16:07.665 }, 00:16:07.665 { 00:16:07.665 "admin_qpairs": 1, 00:16:07.665 "completed_nvme_io": 166, 00:16:07.665 "current_admin_qpairs": 0, 00:16:07.665 "current_io_qpairs": 0, 00:16:07.665 "io_qpairs": 18, 00:16:07.665 "name": "nvmf_tgt_poll_group_3", 00:16:07.665 "pending_bdev_io": 0, 00:16:07.665 "transports": [ 00:16:07.665 { 00:16:07.665 "trtype": "TCP" 00:16:07.665 } 00:16:07.665 ] 00:16:07.665 } 00:16:07.665 ], 00:16:07.665 "tick_rate": 2290000000 00:16:07.665 }' 00:16:07.665 11:43:40 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:07.665 11:43:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:07.665 11:43:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:07.665 11:43:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:07.665 11:43:40 -- target/rpc.sh@112 -- # (( 7 > 0 )) 
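The two assertions here and just below aggregate the nvmf_get_stats output with the suite's jsum helper: a jq filter pulls one numeric field out of every poll group and awk sums the resulting column. A minimal sketch of that helper, reconstructed from the rpc.sh trace; how the captured JSON in $stats is piped into jq is an assumption, since the trace only shows the jq and awk halves.

# Sum a numeric jq filter across all poll groups reported by nvmf_get_stats.
# Sketch; feeding $stats via a here-string is an assumption, error handling omitted.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
# With the stats captured above: 2+3+1+1 admin qpairs and 16+17+19+18 I/O qpairs,
# which is why the (( 7 > 0 )) and (( 70 > 0 )) checks pass.
jsum '.poll_groups[].admin_qpairs'   # -> 7
jsum '.poll_groups[].io_qpairs'      # -> 70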
00:16:07.665 11:43:40 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:07.665 11:43:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:07.665 11:43:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:07.665 11:43:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:07.959 11:43:40 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:16:07.959 11:43:40 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:07.959 11:43:40 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:07.959 11:43:40 -- target/rpc.sh@123 -- # nvmftestfini 00:16:07.959 11:43:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:07.959 11:43:40 -- nvmf/common.sh@116 -- # sync 00:16:07.959 11:43:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:07.959 11:43:40 -- nvmf/common.sh@119 -- # set +e 00:16:07.959 11:43:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:07.959 11:43:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:07.959 rmmod nvme_tcp 00:16:07.959 rmmod nvme_fabrics 00:16:07.959 rmmod nvme_keyring 00:16:07.959 11:43:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:07.959 11:43:40 -- nvmf/common.sh@123 -- # set -e 00:16:07.959 11:43:40 -- nvmf/common.sh@124 -- # return 0 00:16:07.959 11:43:40 -- nvmf/common.sh@477 -- # '[' -n 66256 ']' 00:16:07.959 11:43:40 -- nvmf/common.sh@478 -- # killprocess 66256 00:16:07.959 11:43:40 -- common/autotest_common.sh@936 -- # '[' -z 66256 ']' 00:16:07.959 11:43:40 -- common/autotest_common.sh@940 -- # kill -0 66256 00:16:07.959 11:43:40 -- common/autotest_common.sh@941 -- # uname 00:16:07.959 11:43:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:07.959 11:43:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66256 00:16:07.959 killing process with pid 66256 00:16:07.959 11:43:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:07.959 11:43:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:07.959 11:43:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66256' 00:16:07.959 11:43:40 -- common/autotest_common.sh@955 -- # kill 66256 00:16:07.959 11:43:40 -- common/autotest_common.sh@960 -- # wait 66256 00:16:08.218 11:43:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:08.218 11:43:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:08.218 11:43:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:08.218 11:43:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.218 11:43:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:08.218 11:43:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.218 11:43:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.218 11:43:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.218 11:43:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:08.218 ************************************ 00:16:08.218 END TEST nvmf_rpc 00:16:08.218 ************************************ 00:16:08.218 00:16:08.218 real 0m19.493s 00:16:08.218 user 1m13.900s 00:16:08.218 sys 0m2.260s 00:16:08.218 11:43:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:08.218 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.218 11:43:41 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:08.218 11:43:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:08.218 11:43:41 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:16:08.218 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.218 ************************************ 00:16:08.218 START TEST nvmf_invalid 00:16:08.218 ************************************ 00:16:08.218 11:43:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:08.479 * Looking for test storage... 00:16:08.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:08.479 11:43:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:08.479 11:43:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:08.479 11:43:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:08.479 11:43:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:08.479 11:43:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:08.479 11:43:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:08.479 11:43:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:08.479 11:43:41 -- scripts/common.sh@335 -- # IFS=.-: 00:16:08.479 11:43:41 -- scripts/common.sh@335 -- # read -ra ver1 00:16:08.479 11:43:41 -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.479 11:43:41 -- scripts/common.sh@336 -- # read -ra ver2 00:16:08.479 11:43:41 -- scripts/common.sh@337 -- # local 'op=<' 00:16:08.479 11:43:41 -- scripts/common.sh@339 -- # ver1_l=2 00:16:08.479 11:43:41 -- scripts/common.sh@340 -- # ver2_l=1 00:16:08.479 11:43:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:08.479 11:43:41 -- scripts/common.sh@343 -- # case "$op" in 00:16:08.479 11:43:41 -- scripts/common.sh@344 -- # : 1 00:16:08.479 11:43:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:08.479 11:43:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.479 11:43:41 -- scripts/common.sh@364 -- # decimal 1 00:16:08.479 11:43:41 -- scripts/common.sh@352 -- # local d=1 00:16:08.479 11:43:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.479 11:43:41 -- scripts/common.sh@354 -- # echo 1 00:16:08.479 11:43:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:08.479 11:43:41 -- scripts/common.sh@365 -- # decimal 2 00:16:08.479 11:43:41 -- scripts/common.sh@352 -- # local d=2 00:16:08.479 11:43:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.479 11:43:41 -- scripts/common.sh@354 -- # echo 2 00:16:08.479 11:43:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:08.479 11:43:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:08.479 11:43:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:08.479 11:43:41 -- scripts/common.sh@367 -- # return 0 00:16:08.479 11:43:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.479 11:43:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.479 --rc genhtml_branch_coverage=1 00:16:08.479 --rc genhtml_function_coverage=1 00:16:08.479 --rc genhtml_legend=1 00:16:08.479 --rc geninfo_all_blocks=1 00:16:08.479 --rc geninfo_unexecuted_blocks=1 00:16:08.479 00:16:08.479 ' 00:16:08.479 11:43:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.479 --rc genhtml_branch_coverage=1 00:16:08.479 --rc genhtml_function_coverage=1 00:16:08.479 --rc genhtml_legend=1 00:16:08.479 --rc geninfo_all_blocks=1 00:16:08.479 --rc geninfo_unexecuted_blocks=1 00:16:08.479 00:16:08.479 ' 00:16:08.479 11:43:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.479 --rc genhtml_branch_coverage=1 00:16:08.479 --rc genhtml_function_coverage=1 00:16:08.479 --rc genhtml_legend=1 00:16:08.479 --rc geninfo_all_blocks=1 00:16:08.479 --rc geninfo_unexecuted_blocks=1 00:16:08.479 00:16:08.479 ' 00:16:08.479 11:43:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:08.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.479 --rc genhtml_branch_coverage=1 00:16:08.479 --rc genhtml_function_coverage=1 00:16:08.479 --rc genhtml_legend=1 00:16:08.479 --rc geninfo_all_blocks=1 00:16:08.479 --rc geninfo_unexecuted_blocks=1 00:16:08.479 00:16:08.479 ' 00:16:08.479 11:43:41 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.479 11:43:41 -- nvmf/common.sh@7 -- # uname -s 00:16:08.479 11:43:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.479 11:43:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.479 11:43:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.479 11:43:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.479 11:43:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.479 11:43:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.479 11:43:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.479 11:43:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.479 11:43:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.479 11:43:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.479 11:43:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:16:08.479 
11:43:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:16:08.479 11:43:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.479 11:43:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.479 11:43:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.479 11:43:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.479 11:43:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.479 11:43:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.479 11:43:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.479 11:43:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.479 11:43:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.480 11:43:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.480 11:43:41 -- paths/export.sh@5 -- # export PATH 00:16:08.480 11:43:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.480 11:43:41 -- nvmf/common.sh@46 -- # : 0 00:16:08.480 11:43:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:08.480 11:43:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:08.480 11:43:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:08.480 11:43:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.480 11:43:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.480 11:43:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
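common.sh generates a per-run host identity once and reuses it for every initiator connection in this job: nvme gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the same UUID shows up as the host ID. Deriving the ID from the NQN suffix is an inference from the matching values in this log, not something the trace states. A sketch of how those pieces feed the nvme connect calls seen earlier in the rpc.sh run:

# Host identity for the initiator side; regenerated on every run.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:f0f74192-...
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: host ID is the UUID suffix of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Connect over NVMe/TCP to the listener created by nvmf_subsystem_add_listener,
# matching the invocations traced above.
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420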
00:16:08.480 11:43:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:08.480 11:43:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.480 11:43:41 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:08.480 11:43:41 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.480 11:43:41 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:08.480 11:43:41 -- target/invalid.sh@14 -- # target=foobar 00:16:08.480 11:43:41 -- target/invalid.sh@16 -- # RANDOM=0 00:16:08.480 11:43:41 -- target/invalid.sh@34 -- # nvmftestinit 00:16:08.480 11:43:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:08.480 11:43:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.480 11:43:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.480 11:43:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.480 11:43:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.480 11:43:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.480 11:43:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.480 11:43:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.480 11:43:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:08.480 11:43:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:08.480 11:43:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:08.480 11:43:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:08.480 11:43:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:08.480 11:43:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:08.480 11:43:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.480 11:43:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.480 11:43:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:08.480 11:43:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:08.480 11:43:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.480 11:43:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.480 11:43:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.480 11:43:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.480 11:43:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.480 11:43:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.480 11:43:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.480 11:43:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.480 11:43:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:08.480 11:43:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:08.480 Cannot find device "nvmf_tgt_br" 00:16:08.480 11:43:41 -- nvmf/common.sh@154 -- # true 00:16:08.480 11:43:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.480 Cannot find device "nvmf_tgt_br2" 00:16:08.480 11:43:41 -- nvmf/common.sh@155 -- # true 00:16:08.480 11:43:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:08.480 11:43:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:08.740 Cannot find device "nvmf_tgt_br" 00:16:08.740 11:43:41 -- nvmf/common.sh@157 -- # true 00:16:08.740 11:43:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:08.740 Cannot find device "nvmf_tgt_br2" 00:16:08.741 11:43:41 -- nvmf/common.sh@158 -- # true 00:16:08.741 11:43:41 
-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:08.741 11:43:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:08.741 11:43:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.741 11:43:41 -- nvmf/common.sh@161 -- # true 00:16:08.741 11:43:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.741 11:43:41 -- nvmf/common.sh@162 -- # true 00:16:08.741 11:43:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.741 11:43:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.741 11:43:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.741 11:43:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.741 11:43:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.741 11:43:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.741 11:43:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.741 11:43:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.741 11:43:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.741 11:43:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:08.741 11:43:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:08.741 11:43:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:08.741 11:43:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:08.741 11:43:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.741 11:43:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.741 11:43:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.741 11:43:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:08.741 11:43:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:08.741 11:43:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.741 11:43:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.741 11:43:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.741 11:43:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.741 11:43:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.741 11:43:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:08.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:16:08.741 00:16:08.741 --- 10.0.0.2 ping statistics --- 00:16:08.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.741 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:08.741 11:43:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:08.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:08.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:08.741 00:16:08.741 --- 10.0.0.3 ping statistics --- 00:16:08.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.741 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:08.741 11:43:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:08.741 00:16:08.741 --- 10.0.0.1 ping statistics --- 00:16:08.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.741 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:08.741 11:43:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.741 11:43:41 -- nvmf/common.sh@421 -- # return 0 00:16:08.741 11:43:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:08.741 11:43:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.741 11:43:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:08.741 11:43:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:08.741 11:43:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.741 11:43:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:08.741 11:43:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:09.001 11:43:41 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:09.001 11:43:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:09.001 11:43:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.001 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:16:09.001 11:43:41 -- nvmf/common.sh@469 -- # nvmfpid=66780 00:16:09.001 11:43:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:09.001 11:43:41 -- nvmf/common.sh@470 -- # waitforlisten 66780 00:16:09.001 11:43:41 -- common/autotest_common.sh@829 -- # '[' -z 66780 ']' 00:16:09.001 11:43:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.001 11:43:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.001 11:43:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.001 11:43:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.001 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:16:09.001 [2024-11-20 11:43:41.865777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:09.001 [2024-11-20 11:43:41.865855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.001 [2024-11-20 11:43:42.004154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.261 [2024-11-20 11:43:42.103759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.261 [2024-11-20 11:43:42.103881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.261 [2024-11-20 11:43:42.103887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
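Because NET_TYPE=virt, nvmf_veth_init builds the whole fabric out of veth pairs and a network namespace instead of physical NICs, then verifies it with the three pings above before the target is started inside the namespace. Condensed from the trace (names and addresses exactly as logged; the second target interface on 10.0.0.3 and the link-up steps are omitted for brevity):

# Target side lives in its own namespace; a Linux bridge joins the peer ends.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target listener

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# The target application then runs inside the namespace, as in the nvmfappstart line above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF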
00:16:09.261 [2024-11-20 11:43:42.103892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.261 [2024-11-20 11:43:42.104105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.261 [2024-11-20 11:43:42.104877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.261 [2024-11-20 11:43:42.104946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.261 [2024-11-20 11:43:42.104948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.830 11:43:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.830 11:43:42 -- common/autotest_common.sh@862 -- # return 0 00:16:09.830 11:43:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.830 11:43:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.830 11:43:42 -- common/autotest_common.sh@10 -- # set +x 00:16:09.830 11:43:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.830 11:43:42 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:09.830 11:43:42 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18510 00:16:10.090 [2024-11-20 11:43:42.968073] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:10.090 11:43:42 -- target/invalid.sh@40 -- # out='2024/11/20 11:43:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18510 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:10.090 request: 00:16:10.090 { 00:16:10.090 "method": "nvmf_create_subsystem", 00:16:10.090 "params": { 00:16:10.090 "nqn": "nqn.2016-06.io.spdk:cnode18510", 00:16:10.090 "tgt_name": "foobar" 00:16:10.090 } 00:16:10.090 } 00:16:10.090 Got JSON-RPC error response 00:16:10.090 GoRPCClient: error on JSON-RPC call' 00:16:10.090 11:43:42 -- target/invalid.sh@41 -- # [[ 2024/11/20 11:43:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18510 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:10.090 request: 00:16:10.090 { 00:16:10.090 "method": "nvmf_create_subsystem", 00:16:10.090 "params": { 00:16:10.090 "nqn": "nqn.2016-06.io.spdk:cnode18510", 00:16:10.090 "tgt_name": "foobar" 00:16:10.090 } 00:16:10.090 } 00:16:10.090 Got JSON-RPC error response 00:16:10.090 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:10.090 11:43:42 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:10.090 11:43:42 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8264 00:16:10.349 [2024-11-20 11:43:43.163904] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8264: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:10.349 11:43:43 -- target/invalid.sh@45 -- # out='2024/11/20 11:43:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8264 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:10.349 request: 00:16:10.349 { 00:16:10.349 
"method": "nvmf_create_subsystem", 00:16:10.349 "params": { 00:16:10.349 "nqn": "nqn.2016-06.io.spdk:cnode8264", 00:16:10.349 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:10.349 } 00:16:10.349 } 00:16:10.349 Got JSON-RPC error response 00:16:10.349 GoRPCClient: error on JSON-RPC call' 00:16:10.349 11:43:43 -- target/invalid.sh@46 -- # [[ 2024/11/20 11:43:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8264 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:10.349 request: 00:16:10.349 { 00:16:10.349 "method": "nvmf_create_subsystem", 00:16:10.349 "params": { 00:16:10.349 "nqn": "nqn.2016-06.io.spdk:cnode8264", 00:16:10.349 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:10.349 } 00:16:10.349 } 00:16:10.349 Got JSON-RPC error response 00:16:10.349 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:10.349 11:43:43 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:10.349 11:43:43 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4324 00:16:10.349 [2024-11-20 11:43:43.375746] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4324: invalid model number 'SPDK_Controller' 00:16:10.609 11:43:43 -- target/invalid.sh@50 -- # out='2024/11/20 11:43:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode4324], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:10.609 request: 00:16:10.609 { 00:16:10.609 "method": "nvmf_create_subsystem", 00:16:10.609 "params": { 00:16:10.609 "nqn": "nqn.2016-06.io.spdk:cnode4324", 00:16:10.610 "model_number": "SPDK_Controller\u001f" 00:16:10.610 } 00:16:10.610 } 00:16:10.610 Got JSON-RPC error response 00:16:10.610 GoRPCClient: error on JSON-RPC call' 00:16:10.610 11:43:43 -- target/invalid.sh@51 -- # [[ 2024/11/20 11:43:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode4324], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:10.610 request: 00:16:10.610 { 00:16:10.610 "method": "nvmf_create_subsystem", 00:16:10.610 "params": { 00:16:10.610 "nqn": "nqn.2016-06.io.spdk:cnode4324", 00:16:10.610 "model_number": "SPDK_Controller\u001f" 00:16:10.610 } 00:16:10.610 } 00:16:10.610 Got JSON-RPC error response 00:16:10.610 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:10.610 11:43:43 -- target/invalid.sh@54 -- # gen_random_s 21 00:16:10.610 11:43:43 -- target/invalid.sh@19 -- # local length=21 ll 00:16:10.610 11:43:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:10.610 11:43:43 -- target/invalid.sh@21 -- # local chars 00:16:10.610 11:43:43 -- target/invalid.sh@22 -- # local string 
00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 44 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=, 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 99 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=c 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 81 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=Q 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 36 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+='$' 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 78 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=N 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 64 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=@ 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 87 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=W 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 56 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=8 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 36 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+='$' 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 68 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=D 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 33 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+='!' 
00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 67 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=C 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 89 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=Y 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 77 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=M 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 123 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+='{' 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 36 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+='$' 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 110 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=n 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 53 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=5 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 34 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+='"' 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 94 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+='^' 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # printf %x 120 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:10.610 11:43:43 -- target/invalid.sh@25 -- # string+=x 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.610 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.610 11:43:43 -- target/invalid.sh@28 -- # [[ , == \- ]] 00:16:10.610 11:43:43 -- target/invalid.sh@31 -- # echo ',cQ$N@W8$D!CYM{$n5"^x' 00:16:10.610 11:43:43 -- target/invalid.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ',cQ$N@W8$D!CYM{$n5"^x' nqn.2016-06.io.spdk:cnode8639 00:16:10.870 [2024-11-20 11:43:43.767405] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8639: invalid serial number ',cQ$N@W8$D!CYM{$n5"^x' 00:16:10.870 11:43:43 -- target/invalid.sh@54 -- # out='2024/11/20 11:43:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8639 serial_number:,cQ$N@W8$D!CYM{$n5"^x], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ,cQ$N@W8$D!CYM{$n5"^x 00:16:10.870 request: 00:16:10.870 { 00:16:10.870 "method": "nvmf_create_subsystem", 00:16:10.870 "params": { 00:16:10.870 "nqn": "nqn.2016-06.io.spdk:cnode8639", 00:16:10.870 "serial_number": ",cQ$N@W8$D!CYM{$n5\"^x" 00:16:10.870 } 00:16:10.870 } 00:16:10.870 Got JSON-RPC error response 00:16:10.870 GoRPCClient: error on JSON-RPC call' 00:16:10.870 11:43:43 -- target/invalid.sh@55 -- # [[ 2024/11/20 11:43:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8639 serial_number:,cQ$N@W8$D!CYM{$n5"^x], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ,cQ$N@W8$D!CYM{$n5"^x 00:16:10.870 request: 00:16:10.870 { 00:16:10.870 "method": "nvmf_create_subsystem", 00:16:10.870 "params": { 00:16:10.870 "nqn": "nqn.2016-06.io.spdk:cnode8639", 00:16:10.870 "serial_number": ",cQ$N@W8$D!CYM{$n5\"^x" 00:16:10.870 } 00:16:10.870 } 00:16:10.870 Got JSON-RPC error response 00:16:10.870 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:10.870 11:43:43 -- target/invalid.sh@58 -- # gen_random_s 41 00:16:10.871 11:43:43 -- target/invalid.sh@19 -- # local length=41 ll 00:16:10.871 11:43:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:10.871 11:43:43 -- target/invalid.sh@21 -- # local chars 00:16:10.871 11:43:43 -- target/invalid.sh@22 -- # local string 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 35 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+='#' 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 55 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=7 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 84 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=T 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 60 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+='<' 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 52 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=4 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 55 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=7 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 73 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=I 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 116 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=t 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 51 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=3 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 37 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=% 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 127 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=$'\177' 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 70 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=F 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 75 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=K 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 95 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=_ 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 110 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=n 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 47 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=/ 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 58 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=: 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # printf %x 56 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:10.871 11:43:43 -- target/invalid.sh@25 -- # string+=8 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:10.871 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 77 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=M 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 127 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=$'\177' 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 44 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=, 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 100 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=d 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 111 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=o 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 56 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=8 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 66 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=B 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 71 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=G 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 67 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=C 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 43 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=+ 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 107 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # string+=k 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:43 -- target/invalid.sh@25 -- # printf %x 112 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # string+=p 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # printf %x 34 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # string+='"' 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # printf %x 47 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # string+=/ 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # printf %x 95 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # string+=_ 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # printf %x 54 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:11.131 11:43:44 -- target/invalid.sh@25 -- # string+=6 00:16:11.131 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # printf %x 40 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # string+='(' 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # printf %x 41 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # string+=')' 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # printf %x 50 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # string+=2 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # printf %x 54 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # string+=6 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # printf %x 50 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # string+=2 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # printf %x 48 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # string+=0 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # printf %x 82 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:11.132 11:43:44 -- target/invalid.sh@25 -- # string+=R 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:11.132 11:43:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:11.132 11:43:44 -- target/invalid.sh@28 -- # [[ # == \- ]] 00:16:11.132 11:43:44 -- target/invalid.sh@31 -- # echo '#7T<47It3%FK_n/:8M,do8BGC+kp"/_6()2620R' 00:16:11.132 11:43:44 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '#7T<47It3%FK_n/:8M,do8BGC+kp"/_6()2620R' nqn.2016-06.io.spdk:cnode28547 00:16:11.429 [2024-11-20 11:43:44.278865] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28547: invalid model number '#7T<47It3%FK_n/:8M,do8BGC+kp"/_6()2620R' 00:16:11.429 11:43:44 -- target/invalid.sh@58 -- # out='2024/11/20 11:43:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:#7T<47It3%FK_n/:8M,do8BGC+kp"/_6()2620R nqn:nqn.2016-06.io.spdk:cnode28547], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN #7T<47It3%FK_n/:8M,do8BGC+kp"/_6()2620R 00:16:11.429 request: 00:16:11.429 { 00:16:11.429 "method": "nvmf_create_subsystem", 00:16:11.429 "params": { 00:16:11.429 "nqn": "nqn.2016-06.io.spdk:cnode28547", 00:16:11.429 "model_number": "#7T<47It3%\u007fFK_n/:8M\u007f,do8BGC+kp\"/_6()2620R" 00:16:11.429 } 00:16:11.429 } 00:16:11.429 Got JSON-RPC error response 00:16:11.429 GoRPCClient: error on JSON-RPC call' 00:16:11.429 11:43:44 -- target/invalid.sh@59 -- # [[ 2024/11/20 11:43:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:#7T<47It3%FK_n/:8M,do8BGC+kp"/_6()2620R nqn:nqn.2016-06.io.spdk:cnode28547], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN #7T<47It3%FK_n/:8M,do8BGC+kp"/_6()2620R 00:16:11.429 request: 00:16:11.429 { 00:16:11.429 "method": "nvmf_create_subsystem", 00:16:11.429 "params": { 00:16:11.429 "nqn": "nqn.2016-06.io.spdk:cnode28547", 00:16:11.429 "model_number": "#7T<47It3%\u007fFK_n/:8M\u007f,do8BGC+kp\"/_6()2620R" 00:16:11.429 
} 00:16:11.429 } 00:16:11.429 Got JSON-RPC error response 00:16:11.429 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:11.429 11:43:44 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:11.689 [2024-11-20 11:43:44.494711] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.689 11:43:44 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:11.951 11:43:44 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:11.951 11:43:44 -- target/invalid.sh@67 -- # echo '' 00:16:11.951 11:43:44 -- target/invalid.sh@67 -- # head -n 1 00:16:11.951 11:43:44 -- target/invalid.sh@67 -- # IP= 00:16:11.951 11:43:44 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:11.951 [2024-11-20 11:43:44.955815] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:11.951 11:43:44 -- target/invalid.sh@69 -- # out='2024/11/20 11:43:44 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:11.951 request: 00:16:11.951 { 00:16:11.951 "method": "nvmf_subsystem_remove_listener", 00:16:11.951 "params": { 00:16:11.951 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:11.951 "listen_address": { 00:16:11.951 "trtype": "tcp", 00:16:11.951 "traddr": "", 00:16:11.952 "trsvcid": "4421" 00:16:11.952 } 00:16:11.952 } 00:16:11.952 } 00:16:11.952 Got JSON-RPC error response 00:16:11.952 GoRPCClient: error on JSON-RPC call' 00:16:11.952 11:43:44 -- target/invalid.sh@70 -- # [[ 2024/11/20 11:43:44 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:11.952 request: 00:16:11.952 { 00:16:11.952 "method": "nvmf_subsystem_remove_listener", 00:16:11.952 "params": { 00:16:11.952 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:11.952 "listen_address": { 00:16:11.952 "trtype": "tcp", 00:16:11.952 "traddr": "", 00:16:11.952 "trsvcid": "4421" 00:16:11.952 } 00:16:11.952 } 00:16:11.952 } 00:16:11.952 Got JSON-RPC error response 00:16:11.952 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:11.952 11:43:44 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30892 -i 0 00:16:12.211 [2024-11-20 11:43:45.163813] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30892: invalid cntlid range [0-65519] 00:16:12.211 11:43:45 -- target/invalid.sh@73 -- # out='2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30892], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:12.211 request: 00:16:12.211 { 00:16:12.211 "method": "nvmf_create_subsystem", 00:16:12.211 "params": { 00:16:12.211 "nqn": "nqn.2016-06.io.spdk:cnode30892", 00:16:12.211 "min_cntlid": 0 00:16:12.211 } 00:16:12.211 } 00:16:12.211 Got JSON-RPC error response 00:16:12.211 
GoRPCClient: error on JSON-RPC call' 00:16:12.211 11:43:45 -- target/invalid.sh@74 -- # [[ 2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30892], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:12.211 request: 00:16:12.211 { 00:16:12.211 "method": "nvmf_create_subsystem", 00:16:12.211 "params": { 00:16:12.211 "nqn": "nqn.2016-06.io.spdk:cnode30892", 00:16:12.211 "min_cntlid": 0 00:16:12.211 } 00:16:12.211 } 00:16:12.211 Got JSON-RPC error response 00:16:12.211 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:12.211 11:43:45 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9312 -i 65520 00:16:12.470 [2024-11-20 11:43:45.371730] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9312: invalid cntlid range [65520-65519] 00:16:12.470 11:43:45 -- target/invalid.sh@75 -- # out='2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9312], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:12.470 request: 00:16:12.470 { 00:16:12.470 "method": "nvmf_create_subsystem", 00:16:12.470 "params": { 00:16:12.470 "nqn": "nqn.2016-06.io.spdk:cnode9312", 00:16:12.470 "min_cntlid": 65520 00:16:12.470 } 00:16:12.470 } 00:16:12.470 Got JSON-RPC error response 00:16:12.470 GoRPCClient: error on JSON-RPC call' 00:16:12.470 11:43:45 -- target/invalid.sh@76 -- # [[ 2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9312], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:12.470 request: 00:16:12.470 { 00:16:12.470 "method": "nvmf_create_subsystem", 00:16:12.470 "params": { 00:16:12.470 "nqn": "nqn.2016-06.io.spdk:cnode9312", 00:16:12.470 "min_cntlid": 65520 00:16:12.470 } 00:16:12.470 } 00:16:12.470 Got JSON-RPC error response 00:16:12.470 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:12.470 11:43:45 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3472 -I 0 00:16:12.730 [2024-11-20 11:43:45.575735] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3472: invalid cntlid range [1-0] 00:16:12.730 11:43:45 -- target/invalid.sh@77 -- # out='2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3472], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:16:12.730 request: 00:16:12.730 { 00:16:12.730 "method": "nvmf_create_subsystem", 00:16:12.730 "params": { 00:16:12.730 "nqn": "nqn.2016-06.io.spdk:cnode3472", 00:16:12.730 "max_cntlid": 0 00:16:12.730 } 00:16:12.730 } 00:16:12.730 Got JSON-RPC error response 00:16:12.730 GoRPCClient: error on JSON-RPC call' 00:16:12.730 11:43:45 -- target/invalid.sh@78 -- # [[ 2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3472], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:16:12.730 request: 
00:16:12.730 { 00:16:12.730 "method": "nvmf_create_subsystem", 00:16:12.730 "params": { 00:16:12.730 "nqn": "nqn.2016-06.io.spdk:cnode3472", 00:16:12.730 "max_cntlid": 0 00:16:12.730 } 00:16:12.730 } 00:16:12.730 Got JSON-RPC error response 00:16:12.730 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:12.730 11:43:45 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21635 -I 65520 00:16:12.989 [2024-11-20 11:43:45.819735] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21635: invalid cntlid range [1-65520] 00:16:12.989 11:43:45 -- target/invalid.sh@79 -- # out='2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21635], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:12.989 request: 00:16:12.989 { 00:16:12.989 "method": "nvmf_create_subsystem", 00:16:12.989 "params": { 00:16:12.989 "nqn": "nqn.2016-06.io.spdk:cnode21635", 00:16:12.989 "max_cntlid": 65520 00:16:12.989 } 00:16:12.989 } 00:16:12.989 Got JSON-RPC error response 00:16:12.989 GoRPCClient: error on JSON-RPC call' 00:16:12.989 11:43:45 -- target/invalid.sh@80 -- # [[ 2024/11/20 11:43:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21635], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:12.989 request: 00:16:12.989 { 00:16:12.989 "method": "nvmf_create_subsystem", 00:16:12.989 "params": { 00:16:12.989 "nqn": "nqn.2016-06.io.spdk:cnode21635", 00:16:12.989 "max_cntlid": 65520 00:16:12.989 } 00:16:12.989 } 00:16:12.989 Got JSON-RPC error response 00:16:12.989 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:12.989 11:43:45 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15656 -i 6 -I 5 00:16:13.252 [2024-11-20 11:43:46.103748] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15656: invalid cntlid range [6-5] 00:16:13.252 11:43:46 -- target/invalid.sh@83 -- # out='2024/11/20 11:43:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15656], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:13.252 request: 00:16:13.252 { 00:16:13.252 "method": "nvmf_create_subsystem", 00:16:13.252 "params": { 00:16:13.252 "nqn": "nqn.2016-06.io.spdk:cnode15656", 00:16:13.252 "min_cntlid": 6, 00:16:13.252 "max_cntlid": 5 00:16:13.252 } 00:16:13.252 } 00:16:13.252 Got JSON-RPC error response 00:16:13.252 GoRPCClient: error on JSON-RPC call' 00:16:13.252 11:43:46 -- target/invalid.sh@84 -- # [[ 2024/11/20 11:43:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15656], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:13.252 request: 00:16:13.252 { 00:16:13.252 "method": "nvmf_create_subsystem", 00:16:13.252 "params": { 00:16:13.252 "nqn": "nqn.2016-06.io.spdk:cnode15656", 00:16:13.252 "min_cntlid": 6, 00:16:13.252 "max_cntlid": 5 00:16:13.252 } 00:16:13.252 } 00:16:13.252 Got JSON-RPC error response 00:16:13.252 GoRPCClient: 
error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:13.252 11:43:46 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:13.253 11:43:46 -- target/invalid.sh@87 -- # out='request: 00:16:13.253 { 00:16:13.253 "name": "foobar", 00:16:13.253 "method": "nvmf_delete_target", 00:16:13.253 "req_id": 1 00:16:13.253 } 00:16:13.253 Got JSON-RPC error response 00:16:13.253 response: 00:16:13.253 { 00:16:13.253 "code": -32602, 00:16:13.253 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:13.253 }' 00:16:13.253 11:43:46 -- target/invalid.sh@88 -- # [[ request: 00:16:13.253 { 00:16:13.253 "name": "foobar", 00:16:13.253 "method": "nvmf_delete_target", 00:16:13.253 "req_id": 1 00:16:13.253 } 00:16:13.253 Got JSON-RPC error response 00:16:13.253 response: 00:16:13.253 { 00:16:13.253 "code": -32602, 00:16:13.253 "message": "The specified target doesn't exist, cannot delete it." 00:16:13.253 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:13.253 11:43:46 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:13.253 11:43:46 -- target/invalid.sh@91 -- # nvmftestfini 00:16:13.253 11:43:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:13.253 11:43:46 -- nvmf/common.sh@116 -- # sync 00:16:13.253 11:43:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.253 11:43:46 -- nvmf/common.sh@119 -- # set +e 00:16:13.253 11:43:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.253 11:43:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.253 rmmod nvme_tcp 00:16:13.253 rmmod nvme_fabrics 00:16:13.512 rmmod nvme_keyring 00:16:13.512 11:43:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.512 11:43:46 -- nvmf/common.sh@123 -- # set -e 00:16:13.512 11:43:46 -- nvmf/common.sh@124 -- # return 0 00:16:13.512 11:43:46 -- nvmf/common.sh@477 -- # '[' -n 66780 ']' 00:16:13.512 11:43:46 -- nvmf/common.sh@478 -- # killprocess 66780 00:16:13.512 11:43:46 -- common/autotest_common.sh@936 -- # '[' -z 66780 ']' 00:16:13.512 11:43:46 -- common/autotest_common.sh@940 -- # kill -0 66780 00:16:13.512 11:43:46 -- common/autotest_common.sh@941 -- # uname 00:16:13.512 11:43:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.512 11:43:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66780 00:16:13.512 11:43:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.512 11:43:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.512 11:43:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66780' 00:16:13.512 killing process with pid 66780 00:16:13.512 11:43:46 -- common/autotest_common.sh@955 -- # kill 66780 00:16:13.512 11:43:46 -- common/autotest_common.sh@960 -- # wait 66780 00:16:13.771 11:43:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:13.771 11:43:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:13.771 11:43:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:13.771 11:43:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.771 11:43:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:13.771 11:43:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.771 11:43:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.771 11:43:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.771 
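The cntlid checks traced above pin down the bounds the target enforces: both ends of the controller-ID range must stay within 1-65519, and min_cntlid must not exceed max_cntlid. A small sketch of those probes, assuming a running nvmf_tgt and using arbitrary cnode names:

# Sketch of the cntlid-range probes above; the comments show the Msg each
# rejected call comes back with.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode100 -i 1 -I 65519      # accepted
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode101 -i 0      || true  # Invalid cntlid range [0-65519]
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode102 -I 65520  || true  # Invalid cntlid range [1-65520]
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode103 -i 6 -I 5 || true  # Invalid cntlid range [6-5]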
11:43:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:13.771 00:16:13.771 real 0m5.433s 00:16:13.771 user 0m20.746s 00:16:13.771 sys 0m1.335s 00:16:13.771 11:43:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:13.771 11:43:46 -- common/autotest_common.sh@10 -- # set +x 00:16:13.771 ************************************ 00:16:13.771 END TEST nvmf_invalid 00:16:13.771 ************************************ 00:16:13.771 11:43:46 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:13.771 11:43:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.771 11:43:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.771 11:43:46 -- common/autotest_common.sh@10 -- # set +x 00:16:13.771 ************************************ 00:16:13.771 START TEST nvmf_abort 00:16:13.771 ************************************ 00:16:13.771 11:43:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:13.771 * Looking for test storage... 00:16:13.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:13.771 11:43:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:13.771 11:43:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:13.771 11:43:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:14.028 11:43:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:14.028 11:43:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:14.028 11:43:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:14.028 11:43:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:14.028 11:43:46 -- scripts/common.sh@335 -- # IFS=.-: 00:16:14.028 11:43:46 -- scripts/common.sh@335 -- # read -ra ver1 00:16:14.028 11:43:46 -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.028 11:43:46 -- scripts/common.sh@336 -- # read -ra ver2 00:16:14.028 11:43:46 -- scripts/common.sh@337 -- # local 'op=<' 00:16:14.028 11:43:46 -- scripts/common.sh@339 -- # ver1_l=2 00:16:14.028 11:43:46 -- scripts/common.sh@340 -- # ver2_l=1 00:16:14.028 11:43:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:14.028 11:43:46 -- scripts/common.sh@343 -- # case "$op" in 00:16:14.028 11:43:46 -- scripts/common.sh@344 -- # : 1 00:16:14.028 11:43:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:14.028 11:43:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.028 11:43:46 -- scripts/common.sh@364 -- # decimal 1 00:16:14.028 11:43:46 -- scripts/common.sh@352 -- # local d=1 00:16:14.028 11:43:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.028 11:43:46 -- scripts/common.sh@354 -- # echo 1 00:16:14.028 11:43:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:14.028 11:43:46 -- scripts/common.sh@365 -- # decimal 2 00:16:14.028 11:43:46 -- scripts/common.sh@352 -- # local d=2 00:16:14.028 11:43:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.028 11:43:46 -- scripts/common.sh@354 -- # echo 2 00:16:14.028 11:43:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:14.028 11:43:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:14.029 11:43:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:14.029 11:43:46 -- scripts/common.sh@367 -- # return 0 00:16:14.029 11:43:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.029 11:43:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.029 --rc genhtml_branch_coverage=1 00:16:14.029 --rc genhtml_function_coverage=1 00:16:14.029 --rc genhtml_legend=1 00:16:14.029 --rc geninfo_all_blocks=1 00:16:14.029 --rc geninfo_unexecuted_blocks=1 00:16:14.029 00:16:14.029 ' 00:16:14.029 11:43:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.029 --rc genhtml_branch_coverage=1 00:16:14.029 --rc genhtml_function_coverage=1 00:16:14.029 --rc genhtml_legend=1 00:16:14.029 --rc geninfo_all_blocks=1 00:16:14.029 --rc geninfo_unexecuted_blocks=1 00:16:14.029 00:16:14.029 ' 00:16:14.029 11:43:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.029 --rc genhtml_branch_coverage=1 00:16:14.029 --rc genhtml_function_coverage=1 00:16:14.029 --rc genhtml_legend=1 00:16:14.029 --rc geninfo_all_blocks=1 00:16:14.029 --rc geninfo_unexecuted_blocks=1 00:16:14.029 00:16:14.029 ' 00:16:14.029 11:43:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.029 --rc genhtml_branch_coverage=1 00:16:14.029 --rc genhtml_function_coverage=1 00:16:14.029 --rc genhtml_legend=1 00:16:14.029 --rc geninfo_all_blocks=1 00:16:14.029 --rc geninfo_unexecuted_blocks=1 00:16:14.029 00:16:14.029 ' 00:16:14.029 11:43:46 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.029 11:43:46 -- nvmf/common.sh@7 -- # uname -s 00:16:14.029 11:43:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.029 11:43:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.029 11:43:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.029 11:43:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.029 11:43:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.029 11:43:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.029 11:43:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.029 11:43:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.029 11:43:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.029 11:43:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.029 11:43:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:16:14.029 
11:43:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:16:14.029 11:43:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.029 11:43:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.029 11:43:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.029 11:43:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.029 11:43:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.029 11:43:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.029 11:43:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.029 11:43:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.029 11:43:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.029 11:43:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.029 11:43:46 -- paths/export.sh@5 -- # export PATH 00:16:14.029 11:43:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.029 11:43:46 -- nvmf/common.sh@46 -- # : 0 00:16:14.029 11:43:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.029 11:43:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.029 11:43:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.029 11:43:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.029 11:43:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.029 11:43:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
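nvmf_veth_init, traced next, wires the initiator and the in-namespace target together. A condensed sketch of that topology, using the same interface names and addresses the script uses; the second target interface and bridge (nvmf_tgt_if2 / nvmf_tgt_br2, 10.0.0.3) are set up the same way and omitted here.

# Sketch: initiator stays in the root namespace on 10.0.0.1, the target runs
# inside netns nvmf_tgt_ns_spdk on 10.0.0.2, and bridge nvmf_br joins the
# veth peer ends so NVMe/TCP traffic on port 4420 can flow between them.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # root namespace -> target reachability check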
00:16:14.029 11:43:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.029 11:43:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.029 11:43:46 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.029 11:43:46 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:14.029 11:43:46 -- target/abort.sh@14 -- # nvmftestinit 00:16:14.029 11:43:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.029 11:43:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.029 11:43:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.029 11:43:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.029 11:43:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.029 11:43:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.029 11:43:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.029 11:43:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.029 11:43:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:14.029 11:43:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:14.029 11:43:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:14.029 11:43:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:14.029 11:43:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:14.029 11:43:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:14.029 11:43:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.029 11:43:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.029 11:43:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.029 11:43:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:14.029 11:43:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.029 11:43:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.029 11:43:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.029 11:43:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.029 11:43:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.029 11:43:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.029 11:43:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.029 11:43:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.029 11:43:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:14.029 11:43:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:14.029 Cannot find device "nvmf_tgt_br" 00:16:14.029 11:43:46 -- nvmf/common.sh@154 -- # true 00:16:14.029 11:43:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.029 Cannot find device "nvmf_tgt_br2" 00:16:14.029 11:43:46 -- nvmf/common.sh@155 -- # true 00:16:14.029 11:43:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:14.029 11:43:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:14.029 Cannot find device "nvmf_tgt_br" 00:16:14.029 11:43:47 -- nvmf/common.sh@157 -- # true 00:16:14.029 11:43:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:14.029 Cannot find device "nvmf_tgt_br2" 00:16:14.029 11:43:47 -- nvmf/common.sh@158 -- # true 00:16:14.029 11:43:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:14.029 11:43:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:14.285 11:43:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.285 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:14.285 11:43:47 -- nvmf/common.sh@161 -- # true 00:16:14.285 11:43:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.285 11:43:47 -- nvmf/common.sh@162 -- # true 00:16:14.285 11:43:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.285 11:43:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.285 11:43:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.285 11:43:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.285 11:43:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.285 11:43:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.285 11:43:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.285 11:43:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.285 11:43:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.285 11:43:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:14.285 11:43:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:14.285 11:43:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:14.285 11:43:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:14.285 11:43:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.285 11:43:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.285 11:43:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.285 11:43:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:14.285 11:43:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:14.285 11:43:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.285 11:43:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.285 11:43:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.285 11:43:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.285 11:43:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.285 11:43:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:14.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:16:14.285 00:16:14.285 --- 10.0.0.2 ping statistics --- 00:16:14.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.285 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:14.285 11:43:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:14.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:14.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:14.285 00:16:14.285 --- 10.0.0.3 ping statistics --- 00:16:14.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.285 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:14.285 11:43:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:14.285 00:16:14.285 --- 10.0.0.1 ping statistics --- 00:16:14.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.285 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:14.285 11:43:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.285 11:43:47 -- nvmf/common.sh@421 -- # return 0 00:16:14.285 11:43:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.285 11:43:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.285 11:43:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.285 11:43:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.285 11:43:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.285 11:43:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.285 11:43:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.285 11:43:47 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:14.285 11:43:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.285 11:43:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.285 11:43:47 -- common/autotest_common.sh@10 -- # set +x 00:16:14.285 11:43:47 -- nvmf/common.sh@469 -- # nvmfpid=67287 00:16:14.285 11:43:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:14.285 11:43:47 -- nvmf/common.sh@470 -- # waitforlisten 67287 00:16:14.285 11:43:47 -- common/autotest_common.sh@829 -- # '[' -z 67287 ']' 00:16:14.285 11:43:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.285 11:43:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.285 11:43:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.285 11:43:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.285 11:43:47 -- common/autotest_common.sh@10 -- # set +x 00:16:14.285 [2024-11-20 11:43:47.324384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:14.285 [2024-11-20 11:43:47.324455] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.542 [2024-11-20 11:43:47.463294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:14.542 [2024-11-20 11:43:47.575254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.542 [2024-11-20 11:43:47.575376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.542 [2024-11-20 11:43:47.575383] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.542 [2024-11-20 11:43:47.575389] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:14.542 [2024-11-20 11:43:47.575535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.542 [2024-11-20 11:43:47.575578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.542 [2024-11-20 11:43:47.575580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.507 11:43:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.507 11:43:48 -- common/autotest_common.sh@862 -- # return 0 00:16:15.507 11:43:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.507 11:43:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.507 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.507 11:43:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.507 11:43:48 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:16:15.507 11:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.507 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.507 [2024-11-20 11:43:48.297146] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.507 11:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.507 11:43:48 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:15.507 11:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.507 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.507 Malloc0 00:16:15.507 11:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.507 11:43:48 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:15.507 11:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.507 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.507 Delay0 00:16:15.507 11:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.507 11:43:48 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:15.507 11:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.507 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.507 11:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.508 11:43:48 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:15.508 11:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.508 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 11:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.508 11:43:48 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:15.508 11:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.508 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 [2024-11-20 11:43:48.371423] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.508 11:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.508 11:43:48 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.508 11:43:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.508 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 11:43:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.508 11:43:48 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:15.765 [2024-11-20 11:43:48.552499] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:17.683 Initializing NVMe Controllers 00:16:17.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:17.683 controller IO queue size 128 less than required 00:16:17.683 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:17.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:17.683 Initialization complete. Launching workers. 00:16:17.683 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 47124 00:16:17.683 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47189, failed to submit 62 00:16:17.683 success 47124, unsuccess 65, failed 0 00:16:17.683 11:43:50 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:17.683 11:43:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.683 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:16:17.683 11:43:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.683 11:43:50 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:17.683 11:43:50 -- target/abort.sh@38 -- # nvmftestfini 00:16:17.683 11:43:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:17.683 11:43:50 -- nvmf/common.sh@116 -- # sync 00:16:17.683 11:43:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:17.683 11:43:50 -- nvmf/common.sh@119 -- # set +e 00:16:17.683 11:43:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:17.683 11:43:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:17.683 rmmod nvme_tcp 00:16:17.683 rmmod nvme_fabrics 00:16:17.683 rmmod nvme_keyring 00:16:17.683 11:43:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:17.683 11:43:50 -- nvmf/common.sh@123 -- # set -e 00:16:17.683 11:43:50 -- nvmf/common.sh@124 -- # return 0 00:16:17.683 11:43:50 -- nvmf/common.sh@477 -- # '[' -n 67287 ']' 00:16:17.683 11:43:50 -- nvmf/common.sh@478 -- # killprocess 67287 00:16:17.683 11:43:50 -- common/autotest_common.sh@936 -- # '[' -z 67287 ']' 00:16:17.683 11:43:50 -- common/autotest_common.sh@940 -- # kill -0 67287 00:16:17.683 11:43:50 -- common/autotest_common.sh@941 -- # uname 00:16:17.683 11:43:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:17.683 11:43:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67287 00:16:17.940 11:43:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:17.940 11:43:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:17.940 killing process with pid 67287 00:16:17.940 11:43:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67287' 00:16:17.940 11:43:50 -- common/autotest_common.sh@955 -- # kill 67287 00:16:17.940 11:43:50 -- common/autotest_common.sh@960 -- # wait 67287 00:16:17.940 11:43:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:17.940 11:43:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:17.940 11:43:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:17.940 11:43:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.940 11:43:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:17.940 11:43:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.940 
11:43:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.940 11:43:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.197 11:43:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:18.197 ************************************ 00:16:18.197 END TEST nvmf_abort 00:16:18.197 ************************************ 00:16:18.197 00:16:18.197 real 0m4.328s 00:16:18.197 user 0m12.200s 00:16:18.197 sys 0m1.009s 00:16:18.197 11:43:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:18.197 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:18.197 11:43:51 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:18.197 11:43:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:18.197 11:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.197 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:18.197 ************************************ 00:16:18.197 START TEST nvmf_ns_hotplug_stress 00:16:18.197 ************************************ 00:16:18.197 11:43:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:18.197 * Looking for test storage... 00:16:18.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:18.197 11:43:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:18.197 11:43:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:18.197 11:43:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:18.455 11:43:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:18.455 11:43:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:18.455 11:43:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:18.455 11:43:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:18.455 11:43:51 -- scripts/common.sh@335 -- # IFS=.-: 00:16:18.455 11:43:51 -- scripts/common.sh@335 -- # read -ra ver1 00:16:18.455 11:43:51 -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.455 11:43:51 -- scripts/common.sh@336 -- # read -ra ver2 00:16:18.455 11:43:51 -- scripts/common.sh@337 -- # local 'op=<' 00:16:18.455 11:43:51 -- scripts/common.sh@339 -- # ver1_l=2 00:16:18.455 11:43:51 -- scripts/common.sh@340 -- # ver2_l=1 00:16:18.455 11:43:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:18.455 11:43:51 -- scripts/common.sh@343 -- # case "$op" in 00:16:18.455 11:43:51 -- scripts/common.sh@344 -- # : 1 00:16:18.455 11:43:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:18.455 11:43:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.455 11:43:51 -- scripts/common.sh@364 -- # decimal 1 00:16:18.455 11:43:51 -- scripts/common.sh@352 -- # local d=1 00:16:18.455 11:43:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.455 11:43:51 -- scripts/common.sh@354 -- # echo 1 00:16:18.455 11:43:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:18.455 11:43:51 -- scripts/common.sh@365 -- # decimal 2 00:16:18.455 11:43:51 -- scripts/common.sh@352 -- # local d=2 00:16:18.455 11:43:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.455 11:43:51 -- scripts/common.sh@354 -- # echo 2 00:16:18.455 11:43:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:18.455 11:43:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:18.455 11:43:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:18.455 11:43:51 -- scripts/common.sh@367 -- # return 0 00:16:18.455 11:43:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.455 11:43:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.455 --rc genhtml_branch_coverage=1 00:16:18.455 --rc genhtml_function_coverage=1 00:16:18.455 --rc genhtml_legend=1 00:16:18.455 --rc geninfo_all_blocks=1 00:16:18.455 --rc geninfo_unexecuted_blocks=1 00:16:18.455 00:16:18.455 ' 00:16:18.455 11:43:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.455 --rc genhtml_branch_coverage=1 00:16:18.455 --rc genhtml_function_coverage=1 00:16:18.455 --rc genhtml_legend=1 00:16:18.455 --rc geninfo_all_blocks=1 00:16:18.455 --rc geninfo_unexecuted_blocks=1 00:16:18.455 00:16:18.455 ' 00:16:18.455 11:43:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.455 --rc genhtml_branch_coverage=1 00:16:18.455 --rc genhtml_function_coverage=1 00:16:18.455 --rc genhtml_legend=1 00:16:18.455 --rc geninfo_all_blocks=1 00:16:18.455 --rc geninfo_unexecuted_blocks=1 00:16:18.455 00:16:18.455 ' 00:16:18.455 11:43:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.455 --rc genhtml_branch_coverage=1 00:16:18.455 --rc genhtml_function_coverage=1 00:16:18.455 --rc genhtml_legend=1 00:16:18.455 --rc geninfo_all_blocks=1 00:16:18.455 --rc geninfo_unexecuted_blocks=1 00:16:18.455 00:16:18.455 ' 00:16:18.455 11:43:51 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.455 11:43:51 -- nvmf/common.sh@7 -- # uname -s 00:16:18.455 11:43:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.455 11:43:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.455 11:43:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.455 11:43:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.455 11:43:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.455 11:43:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.455 11:43:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.455 11:43:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.455 11:43:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.455 11:43:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.455 11:43:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
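The run generates a fresh host NQN just above and, on the next line of the trace, reuses its UUID as NVME_HOSTID. A minimal sketch of that derivation, assuming a simple suffix strip, which the logged values are consistent with:

    # nvme-cli generates a random host NQN; the host ID reuses its UUID part.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # drop everything up to the last ':', keeping the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf 'hostnqn=%s hostid=%s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"

These flags are what the suite hands to 'nvme connect' in tests that attach through the kernel initiator; this particular test drives the target with spdk_nvme_perf instead.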
00:16:18.455 11:43:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:16:18.455 11:43:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.455 11:43:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.455 11:43:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.455 11:43:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.455 11:43:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.455 11:43:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.455 11:43:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.455 11:43:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.455 11:43:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.455 11:43:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.455 11:43:51 -- paths/export.sh@5 -- # export PATH 00:16:18.455 11:43:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.455 11:43:51 -- nvmf/common.sh@46 -- # : 0 00:16:18.455 11:43:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:18.455 11:43:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:18.455 11:43:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:18.455 11:43:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.455 11:43:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.455 11:43:51 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:16:18.455 11:43:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:18.455 11:43:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:18.455 11:43:51 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.455 11:43:51 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:18.455 11:43:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:18.455 11:43:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.455 11:43:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:18.455 11:43:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:18.455 11:43:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:18.455 11:43:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.456 11:43:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.456 11:43:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.456 11:43:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:18.456 11:43:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:18.456 11:43:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:18.456 11:43:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:18.456 11:43:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:18.456 11:43:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:18.456 11:43:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.456 11:43:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.456 11:43:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:18.456 11:43:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:18.456 11:43:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.456 11:43:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.456 11:43:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.456 11:43:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.456 11:43:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.456 11:43:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.456 11:43:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.456 11:43:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.456 11:43:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:18.456 11:43:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:18.456 Cannot find device "nvmf_tgt_br" 00:16:18.456 11:43:51 -- nvmf/common.sh@154 -- # true 00:16:18.456 11:43:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.456 Cannot find device "nvmf_tgt_br2" 00:16:18.456 11:43:51 -- nvmf/common.sh@155 -- # true 00:16:18.456 11:43:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:18.456 11:43:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:18.456 Cannot find device "nvmf_tgt_br" 00:16:18.456 11:43:51 -- nvmf/common.sh@157 -- # true 00:16:18.456 11:43:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:18.456 Cannot find device "nvmf_tgt_br2" 00:16:18.456 11:43:51 -- nvmf/common.sh@158 -- # true 00:16:18.456 11:43:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:18.456 11:43:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:18.714 11:43:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.714 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:18.714 11:43:51 -- nvmf/common.sh@161 -- # true 00:16:18.714 11:43:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.714 11:43:51 -- nvmf/common.sh@162 -- # true 00:16:18.714 11:43:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.714 11:43:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.714 11:43:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.714 11:43:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.714 11:43:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:18.714 11:43:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:18.714 11:43:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:18.714 11:43:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:18.714 11:43:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:18.714 11:43:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:18.714 11:43:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:18.714 11:43:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:18.714 11:43:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:18.714 11:43:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:18.714 11:43:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:18.714 11:43:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:18.714 11:43:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:18.714 11:43:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:18.714 11:43:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:18.714 11:43:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:18.714 11:43:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:18.714 11:43:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:18.714 11:43:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:18.714 11:43:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:18.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:18.714 00:16:18.714 --- 10.0.0.2 ping statistics --- 00:16:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.714 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:18.714 11:43:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:18.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:18.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:18.714 00:16:18.714 --- 10.0.0.3 ping statistics --- 00:16:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.714 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:18.714 11:43:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:18.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:18.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:18.714 00:16:18.714 --- 10.0.0.1 ping statistics --- 00:16:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.714 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:18.714 11:43:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.714 11:43:51 -- nvmf/common.sh@421 -- # return 0 00:16:18.714 11:43:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:18.714 11:43:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.714 11:43:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:18.714 11:43:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:18.714 11:43:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.714 11:43:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:18.714 11:43:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:18.714 11:43:51 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:18.714 11:43:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:18.714 11:43:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.714 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:18.714 11:43:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:18.714 11:43:51 -- nvmf/common.sh@469 -- # nvmfpid=67559 00:16:18.714 11:43:51 -- nvmf/common.sh@470 -- # waitforlisten 67559 00:16:18.714 11:43:51 -- common/autotest_common.sh@829 -- # '[' -z 67559 ']' 00:16:18.714 11:43:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.714 11:43:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.714 11:43:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.714 11:43:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.714 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:18.714 [2024-11-20 11:43:51.714377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:18.714 [2024-11-20 11:43:51.714458] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.972 [2024-11-20 11:43:51.860982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.972 [2024-11-20 11:43:51.947452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:18.972 [2024-11-20 11:43:51.947591] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.972 [2024-11-20 11:43:51.947598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.972 [2024-11-20 11:43:51.947603] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
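Before this second target comes up, nvmf_veth_init rebuilds the virtual test network that the pings above verify. A condensed sketch of that plumbing, using the interface and address names from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is omitted here for brevity:

    # One veth pair for the initiator side, one for the target namespace,
    # both enslaved to a bridge so 10.0.0.1 <-> 10.0.0.2 traffic can flow.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1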
00:16:18.972 [2024-11-20 11:43:51.947756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.972 [2024-11-20 11:43:51.948203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.972 [2024-11-20 11:43:51.948204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.556 11:43:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.556 11:43:52 -- common/autotest_common.sh@862 -- # return 0 00:16:19.556 11:43:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:19.556 11:43:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:19.556 11:43:52 -- common/autotest_common.sh@10 -- # set +x 00:16:19.813 11:43:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.813 11:43:52 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:19.813 11:43:52 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:19.813 [2024-11-20 11:43:52.811515] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.813 11:43:52 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:20.070 11:43:53 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.328 [2024-11-20 11:43:53.231746] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.328 11:43:53 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:20.587 11:43:53 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:20.845 Malloc0 00:16:20.845 11:43:53 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:21.103 Delay0 00:16:21.103 11:43:53 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:21.362 11:43:54 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:21.621 NULL1 00:16:21.621 11:43:54 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:21.621 11:43:54 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:21.621 11:43:54 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67691 00:16:21.621 11:43:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:21.621 11:43:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.997 Read completed with error (sct=0, sc=11) 00:16:22.997 11:43:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.997 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:16:22.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.255 11:43:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:23.255 11:43:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:23.514 true 00:16:23.514 11:43:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:23.514 11:43:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.083 11:43:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.344 11:43:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:24.344 11:43:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:24.606 true 00:16:24.606 11:43:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:24.606 11:43:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.865 11:43:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.124 11:43:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:25.124 11:43:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:25.383 true 00:16:25.383 11:43:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:25.383 11:43:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.318 11:43:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.318 11:43:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:26.318 11:43:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:26.575 true 00:16:26.575 11:43:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:26.575 11:43:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.834 11:43:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.092 11:44:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:27.092 11:44:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:27.351 true 00:16:27.351 11:44:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:27.351 11:44:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.407 11:44:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.407 11:44:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
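The pattern that repeats from here on (check that the perf process is still alive, drop namespace 1, add Delay0 back, grow NULL1 by one unit, resize) condenses to roughly the loop below. A sketch based on the xtrace ordering above, not the script verbatim; the RPCs run while the spdk_nvme_perf job started earlier keeps I/O outstanding on the subsystem.

    # Hot-remove/hot-add/resize cycle driven against cnode1 while
    # spdk_nvme_perf (PERF_PID) is running.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
    done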
00:16:28.407 11:44:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:28.665 true 00:16:28.665 11:44:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:28.665 11:44:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.923 11:44:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.182 11:44:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:16:29.182 11:44:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:29.441 true 00:16:29.441 11:44:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:29.441 11:44:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.376 11:44:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.376 11:44:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:30.376 11:44:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:30.635 true 00:16:30.635 11:44:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:30.635 11:44:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.894 11:44:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.151 11:44:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:31.151 11:44:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:31.409 true 00:16:31.409 11:44:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:31.409 11:44:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.344 11:44:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.603 11:44:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:32.603 11:44:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:32.603 true 00:16:32.603 11:44:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:32.603 11:44:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.862 11:44:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.121 11:44:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:33.121 11:44:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:33.380 true 00:16:33.380 11:44:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:33.380 11:44:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.320 11:44:07 -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.581 11:44:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:34.581 11:44:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:34.581 true 00:16:34.581 11:44:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:34.581 11:44:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.848 11:44:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.115 11:44:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:35.115 11:44:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:35.374 true 00:16:35.374 11:44:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:35.374 11:44:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.310 11:44:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.569 11:44:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:36.569 11:44:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:36.569 true 00:16:36.569 11:44:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:36.569 11:44:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.828 11:44:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.086 11:44:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:37.086 11:44:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:37.345 true 00:16:37.345 11:44:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:37.345 11:44:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.284 11:44:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:38.544 11:44:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:38.544 11:44:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:38.804 true 00:16:38.804 11:44:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:38.804 11:44:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.804 11:44:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.064 11:44:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:39.064 11:44:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:39.323 true 00:16:39.323 11:44:12 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:39.323 11:44:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.272 11:44:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:40.555 11:44:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:40.555 11:44:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:40.814 true 00:16:40.814 11:44:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:40.814 11:44:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.814 11:44:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.073 11:44:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:41.073 11:44:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:41.333 true 00:16:41.333 11:44:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:41.333 11:44:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.272 11:44:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.532 11:44:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:42.532 11:44:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:42.792 true 00:16:42.792 11:44:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:42.792 11:44:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.792 11:44:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.053 11:44:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:43.053 11:44:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:43.312 true 00:16:43.312 11:44:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:43.312 11:44:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.249 11:44:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:44.509 11:44:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:44.509 11:44:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:44.768 true 00:16:44.768 11:44:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:44.768 11:44:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.028 11:44:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:45.028 11:44:18 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:16:45.028 11:44:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:45.287 true 00:16:45.287 11:44:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:45.287 11:44:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.226 11:44:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:46.486 11:44:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:46.486 11:44:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:46.745 true 00:16:46.745 11:44:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:46.745 11:44:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.006 11:44:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.267 11:44:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:47.267 11:44:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:47.267 true 00:16:47.267 11:44:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:47.267 11:44:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.204 11:44:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:48.463 11:44:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:48.463 11:44:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:48.723 true 00:16:48.723 11:44:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:48.723 11:44:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.983 11:44:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.243 11:44:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:49.243 11:44:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:49.243 true 00:16:49.243 11:44:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:49.243 11:44:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:50.635 11:44:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:50.635 11:44:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:50.635 11:44:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:50.895 true 00:16:50.895 11:44:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:50.895 11:44:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:51.155 11:44:23 -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:51.155 11:44:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:51.155 11:44:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:51.415 true 00:16:51.415 11:44:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:51.415 11:44:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:52.354 Initializing NVMe Controllers 00:16:52.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:52.354 Controller IO queue size 128, less than required. 00:16:52.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:52.354 Controller IO queue size 128, less than required. 00:16:52.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:52.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:52.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:52.354 Initialization complete. Launching workers. 00:16:52.354 ======================================================== 00:16:52.354 Latency(us) 00:16:52.354 Device Information : IOPS MiB/s Average min max 00:16:52.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 331.33 0.16 220062.06 3356.20 1105974.75 00:16:52.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14830.03 7.24 8631.06 2721.64 535948.18 00:16:52.354 ======================================================== 00:16:52.354 Total : 15161.37 7.40 13251.63 2721.64 1105974.75 00:16:52.354 00:16:52.354 11:44:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:52.613 11:44:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:16:52.613 11:44:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:52.871 true 00:16:52.871 11:44:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67691 00:16:52.871 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67691) - No such process 00:16:52.871 11:44:25 -- target/ns_hotplug_stress.sh@53 -- # wait 67691 00:16:52.871 11:44:25 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.130 11:44:25 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:53.390 null0 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:53.390 11:44:26 -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:53.650 null1 00:16:53.650 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:53.650 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:53.650 11:44:26 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:53.909 null2 00:16:53.909 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:53.909 11:44:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:53.909 11:44:26 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:54.168 null3 00:16:54.168 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:54.168 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:54.168 11:44:27 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:54.428 null4 00:16:54.428 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:54.428 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:54.428 11:44:27 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:54.687 null5 00:16:54.687 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:54.687 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:54.687 11:44:27 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:54.687 null6 00:16:54.687 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:54.687 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:54.687 11:44:27 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:54.945 null7 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
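The eight-way hotplug phase being set up here reduces to the sketch below: one null bdev per worker (arguments as logged: size 100, block size 4096) and one background add/remove loop per namespace ID. A condensed rendering of the add_remove helper and its launch loop as they appear in the xtrace; iteration count and argument order follow the trace.

    # Stress namespace hotplug from eight concurrent workers against cnode1.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    pids=()
    for n in $(seq 0 7); do
        $rpc bdev_null_create "null$n" 100 4096
        add_remove "$((n + 1))" "null$n" &
        pids+=($!)
    done
    wait "${pids[@]}"

The final wait corresponds to the 'wait 68742 68744 ...' call recorded further down in the trace.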
00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@66 -- # wait 68742 68744 68745 68747 68749 68751 68753 68756 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.945 11:44:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:55.204 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:55.205 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.205 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:55.205 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:55.205 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:55.205 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.464 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.465 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.725 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.984 11:44:28 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:56.245 11:44:29 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:56.245 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:56.505 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.766 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:57.027 11:44:29 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:57.027 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:57.027 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:57.027 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.288 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:57.547 11:44:30 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.547 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:57.807 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.807 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.807 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.807 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:57.807 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:57.807 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.807 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.808 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:58.067 11:44:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:58.067 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:58.067 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:58.067 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.327 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:58.586 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:58.586 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
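Taken together, the hot-plug phase traced here is bounded and easy to size: eight workers (namespaces 1–8), ten iterations each, two RPCs per iteration, so 8 × 10 × 2 = 160 nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls against cnode1, all landing between 11:44:27 and 11:44:32. The scrambled ordering is simply the eight background loops writing their xtrace output to the same console as they race each other.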
00:16:58.586 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.586 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:58.586 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:58.586 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:58.586 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:58.587 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:58.587 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:58.587 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:58.846 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:58.847 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:59.107 11:44:31 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:59.107 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.107 11:44:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.107 11:44:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:59.107 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:59.107 11:44:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:59.107 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:59.107 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:59.107 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.107 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.107 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:59.107 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:59.366 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.626 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:59.887 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:00.146 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:00.147 11:44:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:00.147 11:44:32 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:00.147 11:44:32 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:17:00.147 11:44:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:00.147 11:44:32 -- nvmf/common.sh@116 -- # sync 00:17:00.147 11:44:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:00.147 11:44:32 -- nvmf/common.sh@119 -- # set +e 00:17:00.147 11:44:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:00.147 11:44:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:00.147 rmmod nvme_tcp 00:17:00.147 rmmod nvme_fabrics 00:17:00.147 rmmod 
nvme_keyring 00:17:00.147 11:44:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:00.147 11:44:33 -- nvmf/common.sh@123 -- # set -e 00:17:00.147 11:44:33 -- nvmf/common.sh@124 -- # return 0 00:17:00.147 11:44:33 -- nvmf/common.sh@477 -- # '[' -n 67559 ']' 00:17:00.147 11:44:33 -- nvmf/common.sh@478 -- # killprocess 67559 00:17:00.147 11:44:33 -- common/autotest_common.sh@936 -- # '[' -z 67559 ']' 00:17:00.147 11:44:33 -- common/autotest_common.sh@940 -- # kill -0 67559 00:17:00.147 11:44:33 -- common/autotest_common.sh@941 -- # uname 00:17:00.147 11:44:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.147 11:44:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67559 00:17:00.147 11:44:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:00.147 11:44:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:00.147 killing process with pid 67559 00:17:00.147 11:44:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67559' 00:17:00.147 11:44:33 -- common/autotest_common.sh@955 -- # kill 67559 00:17:00.147 11:44:33 -- common/autotest_common.sh@960 -- # wait 67559 00:17:00.406 11:44:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:00.406 11:44:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:00.406 11:44:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:00.406 11:44:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.406 11:44:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:00.406 11:44:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.406 11:44:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.406 11:44:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.665 11:44:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:00.665 00:17:00.665 real 0m42.387s 00:17:00.665 user 3m18.943s 00:17:00.665 sys 0m11.394s 00:17:00.665 11:44:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:00.665 11:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.665 ************************************ 00:17:00.665 END TEST nvmf_ns_hotplug_stress 00:17:00.665 ************************************ 00:17:00.666 11:44:33 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:00.666 11:44:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:00.666 11:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.666 11:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.666 ************************************ 00:17:00.666 START TEST nvmf_connect_stress 00:17:00.666 ************************************ 00:17:00.666 11:44:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:00.666 * Looking for test storage... 
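The teardown traced just above (ns_hotplug_stress.sh@68–@70 into nvmftestfini) boils down to three steps: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt process that was serving cnode1, and flush the test interface. Condensed from the traced commands, with nvmfpid standing in for the literal 67559 seen in this run:

  modprobe -v -r nvme-tcp        # also drags out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # stop the nvmf_tgt reactor (pid 67559 here) ...
  wait "$nvmfpid"                # ... and reap it
  ip -4 addr flush nvmf_init_if  # drop the initiator-side test addresses

With that done, the suite closes out nvmf_ns_hotplug_stress (0m42s wall clock per the summary above) and moves straight on to connect_stress.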
00:17:00.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:00.666 11:44:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:00.666 11:44:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:00.666 11:44:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:00.925 11:44:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:00.925 11:44:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:00.925 11:44:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:00.925 11:44:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:00.925 11:44:33 -- scripts/common.sh@335 -- # IFS=.-: 00:17:00.925 11:44:33 -- scripts/common.sh@335 -- # read -ra ver1 00:17:00.925 11:44:33 -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.925 11:44:33 -- scripts/common.sh@336 -- # read -ra ver2 00:17:00.925 11:44:33 -- scripts/common.sh@337 -- # local 'op=<' 00:17:00.925 11:44:33 -- scripts/common.sh@339 -- # ver1_l=2 00:17:00.925 11:44:33 -- scripts/common.sh@340 -- # ver2_l=1 00:17:00.925 11:44:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:00.925 11:44:33 -- scripts/common.sh@343 -- # case "$op" in 00:17:00.925 11:44:33 -- scripts/common.sh@344 -- # : 1 00:17:00.925 11:44:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:00.925 11:44:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:00.925 11:44:33 -- scripts/common.sh@364 -- # decimal 1 00:17:00.925 11:44:33 -- scripts/common.sh@352 -- # local d=1 00:17:00.925 11:44:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.925 11:44:33 -- scripts/common.sh@354 -- # echo 1 00:17:00.925 11:44:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:00.925 11:44:33 -- scripts/common.sh@365 -- # decimal 2 00:17:00.925 11:44:33 -- scripts/common.sh@352 -- # local d=2 00:17:00.925 11:44:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.925 11:44:33 -- scripts/common.sh@354 -- # echo 2 00:17:00.925 11:44:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:00.925 11:44:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:00.925 11:44:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:00.925 11:44:33 -- scripts/common.sh@367 -- # return 0 00:17:00.925 11:44:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.925 11:44:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:00.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.925 --rc genhtml_branch_coverage=1 00:17:00.925 --rc genhtml_function_coverage=1 00:17:00.925 --rc genhtml_legend=1 00:17:00.925 --rc geninfo_all_blocks=1 00:17:00.925 --rc geninfo_unexecuted_blocks=1 00:17:00.925 00:17:00.925 ' 00:17:00.925 11:44:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:00.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.925 --rc genhtml_branch_coverage=1 00:17:00.925 --rc genhtml_function_coverage=1 00:17:00.925 --rc genhtml_legend=1 00:17:00.925 --rc geninfo_all_blocks=1 00:17:00.925 --rc geninfo_unexecuted_blocks=1 00:17:00.925 00:17:00.925 ' 00:17:00.925 11:44:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:00.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.925 --rc genhtml_branch_coverage=1 00:17:00.925 --rc genhtml_function_coverage=1 00:17:00.925 --rc genhtml_legend=1 00:17:00.925 --rc geninfo_all_blocks=1 00:17:00.925 --rc geninfo_unexecuted_blocks=1 00:17:00.925 00:17:00.925 ' 00:17:00.925 
11:44:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:00.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.925 --rc genhtml_branch_coverage=1 00:17:00.925 --rc genhtml_function_coverage=1 00:17:00.925 --rc genhtml_legend=1 00:17:00.925 --rc geninfo_all_blocks=1 00:17:00.925 --rc geninfo_unexecuted_blocks=1 00:17:00.925 00:17:00.925 ' 00:17:00.925 11:44:33 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.925 11:44:33 -- nvmf/common.sh@7 -- # uname -s 00:17:00.926 11:44:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.926 11:44:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.926 11:44:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.926 11:44:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.926 11:44:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.926 11:44:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.926 11:44:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.926 11:44:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.926 11:44:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.926 11:44:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.926 11:44:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:17:00.926 11:44:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:17:00.926 11:44:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.926 11:44:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.926 11:44:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.926 11:44:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.926 11:44:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.926 11:44:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.926 11:44:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.926 11:44:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.926 11:44:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.926 11:44:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.926 11:44:33 -- paths/export.sh@5 -- # export PATH 00:17:00.926 11:44:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.926 11:44:33 -- nvmf/common.sh@46 -- # : 0 00:17:00.926 11:44:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:00.926 11:44:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:00.926 11:44:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:00.926 11:44:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.926 11:44:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.926 11:44:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:00.926 11:44:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:00.926 11:44:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:00.926 11:44:33 -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:00.926 11:44:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:00.926 11:44:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.926 11:44:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:00.926 11:44:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:00.926 11:44:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:00.926 11:44:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.926 11:44:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.926 11:44:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.926 11:44:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:00.926 11:44:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:00.926 11:44:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:00.926 11:44:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:00.926 11:44:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:00.926 11:44:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:00.926 11:44:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.926 11:44:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.926 11:44:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:00.926 11:44:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:00.926 11:44:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.926 11:44:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.926 11:44:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.926 11:44:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:00.926 11:44:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.926 11:44:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.926 11:44:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.926 11:44:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.926 11:44:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:00.926 11:44:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:00.926 Cannot find device "nvmf_tgt_br" 00:17:00.926 11:44:33 -- nvmf/common.sh@154 -- # true 00:17:00.926 11:44:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.926 Cannot find device "nvmf_tgt_br2" 00:17:00.926 11:44:33 -- nvmf/common.sh@155 -- # true 00:17:00.926 11:44:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:00.926 11:44:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:00.926 Cannot find device "nvmf_tgt_br" 00:17:00.926 11:44:33 -- nvmf/common.sh@157 -- # true 00:17:00.926 11:44:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:00.926 Cannot find device "nvmf_tgt_br2" 00:17:00.926 11:44:33 -- nvmf/common.sh@158 -- # true 00:17:00.926 11:44:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:01.187 11:44:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:01.187 11:44:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.187 11:44:34 -- nvmf/common.sh@161 -- # true 00:17:01.187 11:44:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.187 11:44:34 -- nvmf/common.sh@162 -- # true 00:17:01.187 11:44:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.187 11:44:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.187 11:44:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.187 11:44:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.187 11:44:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.187 11:44:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.187 11:44:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.187 11:44:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.187 11:44:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:01.187 11:44:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:01.187 11:44:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:01.187 11:44:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:01.187 11:44:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:01.187 11:44:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.187 11:44:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.187 11:44:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.187 11:44:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:01.187 11:44:34 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:01.187 11:44:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.187 11:44:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.187 11:44:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.187 11:44:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.187 11:44:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.187 11:44:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:01.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:01.187 00:17:01.187 --- 10.0.0.2 ping statistics --- 00:17:01.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.187 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:01.187 11:44:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:01.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:01.187 00:17:01.187 --- 10.0.0.3 ping statistics --- 00:17:01.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.187 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:01.187 11:44:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:17:01.187 00:17:01.187 --- 10.0.0.1 ping statistics --- 00:17:01.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.187 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:01.187 11:44:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.187 11:44:34 -- nvmf/common.sh@421 -- # return 0 00:17:01.187 11:44:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:01.187 11:44:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.187 11:44:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:01.187 11:44:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:01.187 11:44:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.187 11:44:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:01.187 11:44:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:01.447 11:44:34 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:01.447 11:44:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:01.447 11:44:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:01.447 11:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:01.447 11:44:34 -- nvmf/common.sh@469 -- # nvmfpid=70116 00:17:01.447 11:44:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:01.447 11:44:34 -- nvmf/common.sh@470 -- # waitforlisten 70116 00:17:01.447 11:44:34 -- common/autotest_common.sh@829 -- # '[' -z 70116 ']' 00:17:01.447 11:44:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.447 11:44:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.447 11:44:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
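The three successful pings above are the sanity check on the test network that nvmf_veth_init just built: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target side gets 10.0.0.2 and 10.0.0.3 on veth peers moved into the nvmf_tgt_ns_spdk namespace, and everything is stitched together through the nvmf_br bridge with an iptables rule letting TCP port 4420 through. The load-bearing commands, condensed from the trace (the full sequence, including bringing each link up, lives in nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up                        # each veth end and lo in the netns are also set up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

This is why the listener that connect_stress adds next is bound to 10.0.0.2:4420 — that address sits inside the nvmf_tgt_ns_spdk namespace, reachable from the initiator over the bridge.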
00:17:01.447 11:44:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.447 11:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:01.448 [2024-11-20 11:44:34.296258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:01.448 [2024-11-20 11:44:34.296332] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.448 [2024-11-20 11:44:34.432583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.707 [2024-11-20 11:44:34.528336] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.707 [2024-11-20 11:44:34.528476] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.707 [2024-11-20 11:44:34.528484] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.707 [2024-11-20 11:44:34.528490] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.707 [2024-11-20 11:44:34.528690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.707 [2024-11-20 11:44:34.528885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.707 [2024-11-20 11:44:34.528905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.277 11:44:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.277 11:44:35 -- common/autotest_common.sh@862 -- # return 0 00:17:02.277 11:44:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:02.277 11:44:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.277 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 11:44:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.277 11:44:35 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.277 11:44:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.277 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 [2024-11-20 11:44:35.228820] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.277 11:44:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.277 11:44:35 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:02.277 11:44:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.277 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 11:44:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.277 11:44:35 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.277 11:44:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.277 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 [2024-11-20 11:44:35.253602] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.277 11:44:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.277 11:44:35 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:02.277 11:44:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.277 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 NULL1 00:17:02.277 
11:44:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.277 11:44:35 -- target/connect_stress.sh@21 -- # PERF_PID=70169 00:17:02.277 11:44:35 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:02.277 11:44:35 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:02.277 11:44:35 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # seq 1 20 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.277 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.277 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:02.537 11:44:35 -- target/connect_stress.sh@28 -- # cat 00:17:02.537 11:44:35 -- target/connect_stress.sh@34 -- # kill -0 
70169 00:17:02.538 11:44:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.538 11:44:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.538 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.796 11:44:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.796 11:44:35 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:02.796 11:44:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.796 11:44:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.796 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:03.055 11:44:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.055 11:44:36 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:03.055 11:44:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.055 11:44:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.055 11:44:36 -- common/autotest_common.sh@10 -- # set +x 00:17:03.314 11:44:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.314 11:44:36 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:03.314 11:44:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.314 11:44:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.314 11:44:36 -- common/autotest_common.sh@10 -- # set +x 00:17:03.888 11:44:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.888 11:44:36 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:03.888 11:44:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.888 11:44:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.888 11:44:36 -- common/autotest_common.sh@10 -- # set +x 00:17:04.163 11:44:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.163 11:44:36 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:04.163 11:44:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.163 11:44:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.163 11:44:36 -- common/autotest_common.sh@10 -- # set +x 00:17:04.422 11:44:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.422 11:44:37 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:04.422 11:44:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.422 11:44:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.422 11:44:37 -- common/autotest_common.sh@10 -- # set +x 00:17:04.680 11:44:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.680 11:44:37 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:04.680 11:44:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.680 11:44:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.680 11:44:37 -- common/autotest_common.sh@10 -- # set +x 00:17:04.939 11:44:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.939 11:44:37 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:04.939 11:44:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.939 11:44:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.939 11:44:37 -- common/autotest_common.sh@10 -- # set +x 00:17:05.508 11:44:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.508 11:44:38 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:05.508 11:44:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.508 11:44:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.508 11:44:38 -- common/autotest_common.sh@10 -- # set +x 00:17:05.766 11:44:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.766 11:44:38 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:05.766 11:44:38 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.766 11:44:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.766 11:44:38 -- common/autotest_common.sh@10 -- # set +x 00:17:06.024 11:44:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.024 11:44:38 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:06.024 11:44:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.024 11:44:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.024 11:44:38 -- common/autotest_common.sh@10 -- # set +x 00:17:06.282 11:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.282 11:44:39 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:06.282 11:44:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.282 11:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.282 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.855 11:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.855 11:44:39 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:06.855 11:44:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.855 11:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.855 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:07.120 11:44:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.120 11:44:39 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:07.120 11:44:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.120 11:44:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.120 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:07.379 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.379 11:44:40 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:07.379 11:44:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.379 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.379 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.639 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.639 11:44:40 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:07.639 11:44:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.639 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.639 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.898 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.898 11:44:40 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:07.898 11:44:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.898 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.898 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:08.468 11:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.468 11:44:41 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:08.468 11:44:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.468 11:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.468 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.728 11:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.728 11:44:41 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:08.728 11:44:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.728 11:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.728 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.988 11:44:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.988 11:44:41 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:08.988 11:44:41 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:17:08.988 11:44:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.988 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:09.248 11:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.248 11:44:42 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:09.248 11:44:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.248 11:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.248 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:09.507 11:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.507 11:44:42 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:09.507 11:44:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.507 11:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.507 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:10.078 11:44:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.078 11:44:42 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:10.078 11:44:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.078 11:44:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.078 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:10.337 11:44:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.337 11:44:43 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:10.337 11:44:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.337 11:44:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.337 11:44:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.597 11:44:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.597 11:44:43 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:10.597 11:44:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.597 11:44:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.597 11:44:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.857 11:44:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.857 11:44:43 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:10.857 11:44:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.857 11:44:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.857 11:44:43 -- common/autotest_common.sh@10 -- # set +x 00:17:11.117 11:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.117 11:44:44 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:11.117 11:44:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.117 11:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.117 11:44:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.685 11:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.685 11:44:44 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:11.685 11:44:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.685 11:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.685 11:44:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.944 11:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.944 11:44:44 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:11.944 11:44:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.944 11:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.944 11:44:44 -- common/autotest_common.sh@10 -- # set +x 00:17:12.202 11:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.202 11:44:45 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:12.202 11:44:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.202 11:44:45 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.202 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:12.461 11:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.461 11:44:45 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:12.461 11:44:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.461 11:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.461 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:12.461 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:13.028 11:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.028 11:44:45 -- target/connect_stress.sh@34 -- # kill -0 70169 00:17:13.028 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70169) - No such process 00:17:13.028 11:44:45 -- target/connect_stress.sh@38 -- # wait 70169 00:17:13.028 11:44:45 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:13.028 11:44:45 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:13.028 11:44:45 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:13.028 11:44:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:13.028 11:44:45 -- nvmf/common.sh@116 -- # sync 00:17:13.028 11:44:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:13.028 11:44:45 -- nvmf/common.sh@119 -- # set +e 00:17:13.028 11:44:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:13.028 11:44:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:13.028 rmmod nvme_tcp 00:17:13.028 rmmod nvme_fabrics 00:17:13.028 rmmod nvme_keyring 00:17:13.028 11:44:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:13.028 11:44:45 -- nvmf/common.sh@123 -- # set -e 00:17:13.028 11:44:45 -- nvmf/common.sh@124 -- # return 0 00:17:13.028 11:44:45 -- nvmf/common.sh@477 -- # '[' -n 70116 ']' 00:17:13.028 11:44:45 -- nvmf/common.sh@478 -- # killprocess 70116 00:17:13.028 11:44:45 -- common/autotest_common.sh@936 -- # '[' -z 70116 ']' 00:17:13.028 11:44:45 -- common/autotest_common.sh@940 -- # kill -0 70116 00:17:13.028 11:44:45 -- common/autotest_common.sh@941 -- # uname 00:17:13.028 11:44:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.028 11:44:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70116 00:17:13.028 killing process with pid 70116 00:17:13.028 11:44:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:13.028 11:44:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:13.028 11:44:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70116' 00:17:13.028 11:44:45 -- common/autotest_common.sh@955 -- # kill 70116 00:17:13.028 11:44:45 -- common/autotest_common.sh@960 -- # wait 70116 00:17:13.287 11:44:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:13.287 11:44:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:13.287 11:44:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:13.287 11:44:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.287 11:44:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:13.287 11:44:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.287 11:44:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.287 11:44:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.287 11:44:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:13.287 00:17:13.287 real 0m12.655s 
00:17:13.287 user 0m42.233s 00:17:13.287 sys 0m2.871s 00:17:13.287 ************************************ 00:17:13.287 END TEST nvmf_connect_stress 00:17:13.287 ************************************ 00:17:13.287 11:44:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:13.287 11:44:46 -- common/autotest_common.sh@10 -- # set +x 00:17:13.287 11:44:46 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:13.287 11:44:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.287 11:44:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.287 11:44:46 -- common/autotest_common.sh@10 -- # set +x 00:17:13.287 ************************************ 00:17:13.287 START TEST nvmf_fused_ordering 00:17:13.287 ************************************ 00:17:13.287 11:44:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:13.547 * Looking for test storage... 00:17:13.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:13.547 11:44:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:13.547 11:44:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:13.547 11:44:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:13.547 11:44:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:13.547 11:44:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:13.547 11:44:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:13.547 11:44:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:13.547 11:44:46 -- scripts/common.sh@335 -- # IFS=.-: 00:17:13.547 11:44:46 -- scripts/common.sh@335 -- # read -ra ver1 00:17:13.547 11:44:46 -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.547 11:44:46 -- scripts/common.sh@336 -- # read -ra ver2 00:17:13.547 11:44:46 -- scripts/common.sh@337 -- # local 'op=<' 00:17:13.547 11:44:46 -- scripts/common.sh@339 -- # ver1_l=2 00:17:13.547 11:44:46 -- scripts/common.sh@340 -- # ver2_l=1 00:17:13.547 11:44:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:13.547 11:44:46 -- scripts/common.sh@343 -- # case "$op" in 00:17:13.547 11:44:46 -- scripts/common.sh@344 -- # : 1 00:17:13.547 11:44:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:13.547 11:44:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.547 11:44:46 -- scripts/common.sh@364 -- # decimal 1 00:17:13.547 11:44:46 -- scripts/common.sh@352 -- # local d=1 00:17:13.547 11:44:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.547 11:44:46 -- scripts/common.sh@354 -- # echo 1 00:17:13.547 11:44:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:13.547 11:44:46 -- scripts/common.sh@365 -- # decimal 2 00:17:13.547 11:44:46 -- scripts/common.sh@352 -- # local d=2 00:17:13.547 11:44:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.547 11:44:46 -- scripts/common.sh@354 -- # echo 2 00:17:13.547 11:44:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:13.547 11:44:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:13.547 11:44:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:13.547 11:44:46 -- scripts/common.sh@367 -- # return 0 00:17:13.547 11:44:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.547 11:44:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:13.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.547 --rc genhtml_branch_coverage=1 00:17:13.547 --rc genhtml_function_coverage=1 00:17:13.547 --rc genhtml_legend=1 00:17:13.547 --rc geninfo_all_blocks=1 00:17:13.547 --rc geninfo_unexecuted_blocks=1 00:17:13.547 00:17:13.547 ' 00:17:13.547 11:44:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:13.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.547 --rc genhtml_branch_coverage=1 00:17:13.547 --rc genhtml_function_coverage=1 00:17:13.547 --rc genhtml_legend=1 00:17:13.547 --rc geninfo_all_blocks=1 00:17:13.547 --rc geninfo_unexecuted_blocks=1 00:17:13.547 00:17:13.547 ' 00:17:13.547 11:44:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:13.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.547 --rc genhtml_branch_coverage=1 00:17:13.547 --rc genhtml_function_coverage=1 00:17:13.547 --rc genhtml_legend=1 00:17:13.547 --rc geninfo_all_blocks=1 00:17:13.547 --rc geninfo_unexecuted_blocks=1 00:17:13.547 00:17:13.547 ' 00:17:13.547 11:44:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:13.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.547 --rc genhtml_branch_coverage=1 00:17:13.547 --rc genhtml_function_coverage=1 00:17:13.547 --rc genhtml_legend=1 00:17:13.547 --rc geninfo_all_blocks=1 00:17:13.547 --rc geninfo_unexecuted_blocks=1 00:17:13.547 00:17:13.547 ' 00:17:13.547 11:44:46 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.547 11:44:46 -- nvmf/common.sh@7 -- # uname -s 00:17:13.547 11:44:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.547 11:44:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.547 11:44:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.547 11:44:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.547 11:44:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.547 11:44:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.547 11:44:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.547 11:44:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.547 11:44:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.547 11:44:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.547 11:44:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
00:17:13.547 11:44:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:17:13.547 11:44:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.547 11:44:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.547 11:44:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.547 11:44:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.547 11:44:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.547 11:44:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.547 11:44:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.547 11:44:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.547 11:44:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.547 11:44:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.547 11:44:46 -- paths/export.sh@5 -- # export PATH 00:17:13.548 11:44:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.548 11:44:46 -- nvmf/common.sh@46 -- # : 0 00:17:13.548 11:44:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:13.548 11:44:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:13.548 11:44:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:13.548 11:44:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.548 11:44:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.548 11:44:46 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:13.548 11:44:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:13.548 11:44:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:13.548 11:44:46 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:13.548 11:44:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:13.548 11:44:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.548 11:44:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:13.548 11:44:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:13.548 11:44:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:13.548 11:44:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.548 11:44:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.548 11:44:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.548 11:44:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:13.548 11:44:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:13.548 11:44:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:13.548 11:44:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:13.548 11:44:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:13.548 11:44:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:13.548 11:44:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.548 11:44:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.548 11:44:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.548 11:44:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:13.548 11:44:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.548 11:44:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.548 11:44:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.548 11:44:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.548 11:44:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.548 11:44:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.548 11:44:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.548 11:44:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.548 11:44:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:13.548 11:44:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:13.548 Cannot find device "nvmf_tgt_br" 00:17:13.548 11:44:46 -- nvmf/common.sh@154 -- # true 00:17:13.548 11:44:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.548 Cannot find device "nvmf_tgt_br2" 00:17:13.548 11:44:46 -- nvmf/common.sh@155 -- # true 00:17:13.548 11:44:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:13.548 11:44:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:13.808 Cannot find device "nvmf_tgt_br" 00:17:13.808 11:44:46 -- nvmf/common.sh@157 -- # true 00:17:13.808 11:44:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:13.808 Cannot find device "nvmf_tgt_br2" 00:17:13.808 11:44:46 -- nvmf/common.sh@158 -- # true 00:17:13.808 11:44:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:13.808 11:44:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:13.808 11:44:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.808 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.808 11:44:46 -- nvmf/common.sh@161 -- # true 00:17:13.808 11:44:46 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.808 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.808 11:44:46 -- nvmf/common.sh@162 -- # true 00:17:13.808 11:44:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.808 11:44:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.808 11:44:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.808 11:44:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.808 11:44:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.808 11:44:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.808 11:44:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.808 11:44:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:13.808 11:44:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:13.808 11:44:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:13.808 11:44:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:13.808 11:44:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:13.808 11:44:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:13.808 11:44:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.808 11:44:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.808 11:44:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.808 11:44:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:13.808 11:44:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:13.808 11:44:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.808 11:44:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.808 11:44:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.808 11:44:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.808 11:44:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.808 11:44:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:13.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:17:13.808 00:17:13.808 --- 10.0.0.2 ping statistics --- 00:17:13.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.808 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:13.808 11:44:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:13.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:17:13.808 00:17:13.808 --- 10.0.0.3 ping statistics --- 00:17:13.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.808 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:13.808 11:44:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:17:13.808 00:17:13.808 --- 10.0.0.1 ping statistics --- 00:17:13.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.808 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:13.808 11:44:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.808 11:44:46 -- nvmf/common.sh@421 -- # return 0 00:17:13.808 11:44:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:13.808 11:44:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.808 11:44:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:13.808 11:44:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:13.808 11:44:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.808 11:44:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:13.808 11:44:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:14.068 11:44:46 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:14.068 11:44:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:14.068 11:44:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.068 11:44:46 -- common/autotest_common.sh@10 -- # set +x 00:17:14.068 11:44:46 -- nvmf/common.sh@469 -- # nvmfpid=70506 00:17:14.068 11:44:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.068 11:44:46 -- nvmf/common.sh@470 -- # waitforlisten 70506 00:17:14.068 11:44:46 -- common/autotest_common.sh@829 -- # '[' -z 70506 ']' 00:17:14.068 11:44:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.068 11:44:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.068 11:44:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.068 11:44:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.068 11:44:46 -- common/autotest_common.sh@10 -- # set +x 00:17:14.068 [2024-11-20 11:44:46.909643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:14.068 [2024-11-20 11:44:46.909717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.068 [2024-11-20 11:44:47.047676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.328 [2024-11-20 11:44:47.134809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:14.328 [2024-11-20 11:44:47.134919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.328 [2024-11-20 11:44:47.134926] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.328 [2024-11-20 11:44:47.134931] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:14.328 [2024-11-20 11:44:47.134954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.896 11:44:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.896 11:44:47 -- common/autotest_common.sh@862 -- # return 0 00:17:14.896 11:44:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:14.896 11:44:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.896 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 11:44:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.896 11:44:47 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.896 11:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.896 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 [2024-11-20 11:44:47.812644] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.896 11:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.896 11:44:47 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:14.896 11:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.896 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 11:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.896 11:44:47 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.896 11:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.896 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 [2024-11-20 11:44:47.836723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.896 11:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.896 11:44:47 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:14.896 11:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.896 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 NULL1 00:17:14.896 11:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.896 11:44:47 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:14.896 11:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.896 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 11:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.896 11:44:47 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:14.896 11:44:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.896 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 11:44:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.896 11:44:47 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:14.896 [2024-11-20 11:44:47.905612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:14.896 [2024-11-20 11:44:47.905678] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70556 ] 00:17:15.466 Attached to nqn.2016-06.io.spdk:cnode1 00:17:15.466 Namespace ID: 1 size: 1GB 00:17:15.466 fused_ordering(0) 00:17:15.466 fused_ordering(1) 00:17:15.466 fused_ordering(2) 00:17:15.466 fused_ordering(3) 00:17:15.466 fused_ordering(4) 00:17:15.466 fused_ordering(5) 00:17:15.466 fused_ordering(6) 00:17:15.466 fused_ordering(7) 00:17:15.466 fused_ordering(8) 00:17:15.466 fused_ordering(9) 00:17:15.466 fused_ordering(10) 00:17:15.466 fused_ordering(11) 00:17:15.466 fused_ordering(12) 00:17:15.466 fused_ordering(13) 00:17:15.466 fused_ordering(14) 00:17:15.466 fused_ordering(15) 00:17:15.466 fused_ordering(16) 00:17:15.466 fused_ordering(17) 00:17:15.466 fused_ordering(18) 00:17:15.466 fused_ordering(19) 00:17:15.466 fused_ordering(20) 00:17:15.466 fused_ordering(21) 00:17:15.466 fused_ordering(22) 00:17:15.466 fused_ordering(23) 00:17:15.466 fused_ordering(24) 00:17:15.466 fused_ordering(25) 00:17:15.466 fused_ordering(26) 00:17:15.466 fused_ordering(27) 00:17:15.466 fused_ordering(28) 00:17:15.466 fused_ordering(29) 00:17:15.466 fused_ordering(30) 00:17:15.466 fused_ordering(31) 00:17:15.466 fused_ordering(32) 00:17:15.466 fused_ordering(33) 00:17:15.466 fused_ordering(34) 00:17:15.466 fused_ordering(35) 00:17:15.466 fused_ordering(36) 00:17:15.466 fused_ordering(37) 00:17:15.466 fused_ordering(38) 00:17:15.466 fused_ordering(39) 00:17:15.466 fused_ordering(40) 00:17:15.466 fused_ordering(41) 00:17:15.466 fused_ordering(42) 00:17:15.466 fused_ordering(43) 00:17:15.466 fused_ordering(44) 00:17:15.466 fused_ordering(45) 00:17:15.466 fused_ordering(46) 00:17:15.466 fused_ordering(47) 00:17:15.466 fused_ordering(48) 00:17:15.466 fused_ordering(49) 00:17:15.466 fused_ordering(50) 00:17:15.466 fused_ordering(51) 00:17:15.466 fused_ordering(52) 00:17:15.466 fused_ordering(53) 00:17:15.466 fused_ordering(54) 00:17:15.466 fused_ordering(55) 00:17:15.466 fused_ordering(56) 00:17:15.466 fused_ordering(57) 00:17:15.466 fused_ordering(58) 00:17:15.466 fused_ordering(59) 00:17:15.466 fused_ordering(60) 00:17:15.466 fused_ordering(61) 00:17:15.466 fused_ordering(62) 00:17:15.466 fused_ordering(63) 00:17:15.466 fused_ordering(64) 00:17:15.466 fused_ordering(65) 00:17:15.466 fused_ordering(66) 00:17:15.466 fused_ordering(67) 00:17:15.466 fused_ordering(68) 00:17:15.466 fused_ordering(69) 00:17:15.466 fused_ordering(70) 00:17:15.466 fused_ordering(71) 00:17:15.466 fused_ordering(72) 00:17:15.466 fused_ordering(73) 00:17:15.466 fused_ordering(74) 00:17:15.466 fused_ordering(75) 00:17:15.466 fused_ordering(76) 00:17:15.466 fused_ordering(77) 00:17:15.466 fused_ordering(78) 00:17:15.466 fused_ordering(79) 00:17:15.466 fused_ordering(80) 00:17:15.466 fused_ordering(81) 00:17:15.466 fused_ordering(82) 00:17:15.466 fused_ordering(83) 00:17:15.466 fused_ordering(84) 00:17:15.466 fused_ordering(85) 00:17:15.466 fused_ordering(86) 00:17:15.466 fused_ordering(87) 00:17:15.466 fused_ordering(88) 00:17:15.466 fused_ordering(89) 00:17:15.466 fused_ordering(90) 00:17:15.466 fused_ordering(91) 00:17:15.466 fused_ordering(92) 00:17:15.466 fused_ordering(93) 00:17:15.466 fused_ordering(94) 00:17:15.466 fused_ordering(95) 00:17:15.466 fused_ordering(96) 00:17:15.466 fused_ordering(97) 00:17:15.466 fused_ordering(98) 
00:17:15.466 fused_ordering(99) 00:17:15.466 fused_ordering(100) 00:17:15.466 fused_ordering(101) 00:17:15.466 fused_ordering(102) 00:17:15.466 fused_ordering(103) 00:17:15.466 fused_ordering(104) 00:17:15.466 fused_ordering(105) 00:17:15.466 fused_ordering(106) 00:17:15.466 fused_ordering(107) 00:17:15.466 fused_ordering(108) 00:17:15.466 fused_ordering(109) 00:17:15.466 fused_ordering(110) 00:17:15.466 fused_ordering(111) 00:17:15.466 fused_ordering(112) 00:17:15.466 fused_ordering(113) 00:17:15.466 fused_ordering(114) 00:17:15.466 fused_ordering(115) 00:17:15.466 fused_ordering(116) 00:17:15.466 fused_ordering(117) 00:17:15.466 fused_ordering(118) 00:17:15.466 fused_ordering(119) 00:17:15.466 fused_ordering(120) 00:17:15.466 fused_ordering(121) 00:17:15.466 fused_ordering(122) 00:17:15.466 fused_ordering(123) 00:17:15.466 fused_ordering(124) 00:17:15.466 fused_ordering(125) 00:17:15.466 fused_ordering(126) 00:17:15.466 fused_ordering(127) 00:17:15.466 fused_ordering(128) 00:17:15.466 fused_ordering(129) 00:17:15.466 fused_ordering(130) 00:17:15.466 fused_ordering(131) 00:17:15.466 fused_ordering(132) 00:17:15.466 fused_ordering(133) 00:17:15.466 fused_ordering(134) 00:17:15.466 fused_ordering(135) 00:17:15.466 fused_ordering(136) 00:17:15.466 fused_ordering(137) 00:17:15.466 fused_ordering(138) 00:17:15.466 fused_ordering(139) 00:17:15.466 fused_ordering(140) 00:17:15.466 fused_ordering(141) 00:17:15.466 fused_ordering(142) 00:17:15.466 fused_ordering(143) 00:17:15.466 fused_ordering(144) 00:17:15.466 fused_ordering(145) 00:17:15.466 fused_ordering(146) 00:17:15.466 fused_ordering(147) 00:17:15.466 fused_ordering(148) 00:17:15.466 fused_ordering(149) 00:17:15.466 fused_ordering(150) 00:17:15.466 fused_ordering(151) 00:17:15.466 fused_ordering(152) 00:17:15.466 fused_ordering(153) 00:17:15.466 fused_ordering(154) 00:17:15.466 fused_ordering(155) 00:17:15.466 fused_ordering(156) 00:17:15.466 fused_ordering(157) 00:17:15.466 fused_ordering(158) 00:17:15.466 fused_ordering(159) 00:17:15.466 fused_ordering(160) 00:17:15.466 fused_ordering(161) 00:17:15.466 fused_ordering(162) 00:17:15.466 fused_ordering(163) 00:17:15.466 fused_ordering(164) 00:17:15.466 fused_ordering(165) 00:17:15.467 fused_ordering(166) 00:17:15.467 fused_ordering(167) 00:17:15.467 fused_ordering(168) 00:17:15.467 fused_ordering(169) 00:17:15.467 fused_ordering(170) 00:17:15.467 fused_ordering(171) 00:17:15.467 fused_ordering(172) 00:17:15.467 fused_ordering(173) 00:17:15.467 fused_ordering(174) 00:17:15.467 fused_ordering(175) 00:17:15.467 fused_ordering(176) 00:17:15.467 fused_ordering(177) 00:17:15.467 fused_ordering(178) 00:17:15.467 fused_ordering(179) 00:17:15.467 fused_ordering(180) 00:17:15.467 fused_ordering(181) 00:17:15.467 fused_ordering(182) 00:17:15.467 fused_ordering(183) 00:17:15.467 fused_ordering(184) 00:17:15.467 fused_ordering(185) 00:17:15.467 fused_ordering(186) 00:17:15.467 fused_ordering(187) 00:17:15.467 fused_ordering(188) 00:17:15.467 fused_ordering(189) 00:17:15.467 fused_ordering(190) 00:17:15.467 fused_ordering(191) 00:17:15.467 fused_ordering(192) 00:17:15.467 fused_ordering(193) 00:17:15.467 fused_ordering(194) 00:17:15.467 fused_ordering(195) 00:17:15.467 fused_ordering(196) 00:17:15.467 fused_ordering(197) 00:17:15.467 fused_ordering(198) 00:17:15.467 fused_ordering(199) 00:17:15.467 fused_ordering(200) 00:17:15.467 fused_ordering(201) 00:17:15.467 fused_ordering(202) 00:17:15.467 fused_ordering(203) 00:17:15.467 fused_ordering(204) 00:17:15.467 fused_ordering(205) 00:17:15.467 
fused_ordering(206) 00:17:15.467 fused_ordering(207) 00:17:15.467 fused_ordering(208) 00:17:15.467 fused_ordering(209) 00:17:15.467 fused_ordering(210) 00:17:15.467 fused_ordering(211) 00:17:15.467 fused_ordering(212) 00:17:15.467 fused_ordering(213) 00:17:15.467 fused_ordering(214) 00:17:15.467 fused_ordering(215) 00:17:15.467 fused_ordering(216) 00:17:15.467 fused_ordering(217) 00:17:15.467 fused_ordering(218) 00:17:15.467 fused_ordering(219) 00:17:15.467 fused_ordering(220) 00:17:15.467 fused_ordering(221) 00:17:15.467 fused_ordering(222) 00:17:15.467 fused_ordering(223) 00:17:15.467 fused_ordering(224) 00:17:15.467 fused_ordering(225) 00:17:15.467 fused_ordering(226) 00:17:15.467 fused_ordering(227) 00:17:15.467 fused_ordering(228) 00:17:15.467 fused_ordering(229) 00:17:15.467 fused_ordering(230) 00:17:15.467 fused_ordering(231) 00:17:15.467 fused_ordering(232) 00:17:15.467 fused_ordering(233) 00:17:15.467 fused_ordering(234) 00:17:15.467 fused_ordering(235) 00:17:15.467 fused_ordering(236) 00:17:15.467 fused_ordering(237) 00:17:15.467 fused_ordering(238) 00:17:15.467 fused_ordering(239) 00:17:15.467 fused_ordering(240) 00:17:15.467 fused_ordering(241) 00:17:15.467 fused_ordering(242) 00:17:15.467 fused_ordering(243) 00:17:15.467 fused_ordering(244) 00:17:15.467 fused_ordering(245) 00:17:15.467 fused_ordering(246) 00:17:15.467 fused_ordering(247) 00:17:15.467 fused_ordering(248) 00:17:15.467 fused_ordering(249) 00:17:15.467 fused_ordering(250) 00:17:15.467 fused_ordering(251) 00:17:15.467 fused_ordering(252) 00:17:15.467 fused_ordering(253) 00:17:15.467 fused_ordering(254) 00:17:15.467 fused_ordering(255) 00:17:15.467 fused_ordering(256) 00:17:15.467 fused_ordering(257) 00:17:15.467 fused_ordering(258) 00:17:15.467 fused_ordering(259) 00:17:15.467 fused_ordering(260) 00:17:15.467 fused_ordering(261) 00:17:15.467 fused_ordering(262) 00:17:15.467 fused_ordering(263) 00:17:15.467 fused_ordering(264) 00:17:15.467 fused_ordering(265) 00:17:15.467 fused_ordering(266) 00:17:15.467 fused_ordering(267) 00:17:15.467 fused_ordering(268) 00:17:15.467 fused_ordering(269) 00:17:15.467 fused_ordering(270) 00:17:15.467 fused_ordering(271) 00:17:15.467 fused_ordering(272) 00:17:15.467 fused_ordering(273) 00:17:15.467 fused_ordering(274) 00:17:15.467 fused_ordering(275) 00:17:15.467 fused_ordering(276) 00:17:15.467 fused_ordering(277) 00:17:15.467 fused_ordering(278) 00:17:15.467 fused_ordering(279) 00:17:15.467 fused_ordering(280) 00:17:15.467 fused_ordering(281) 00:17:15.467 fused_ordering(282) 00:17:15.467 fused_ordering(283) 00:17:15.467 fused_ordering(284) 00:17:15.467 fused_ordering(285) 00:17:15.467 fused_ordering(286) 00:17:15.467 fused_ordering(287) 00:17:15.467 fused_ordering(288) 00:17:15.467 fused_ordering(289) 00:17:15.467 fused_ordering(290) 00:17:15.467 fused_ordering(291) 00:17:15.467 fused_ordering(292) 00:17:15.467 fused_ordering(293) 00:17:15.467 fused_ordering(294) 00:17:15.467 fused_ordering(295) 00:17:15.467 fused_ordering(296) 00:17:15.467 fused_ordering(297) 00:17:15.467 fused_ordering(298) 00:17:15.467 fused_ordering(299) 00:17:15.467 fused_ordering(300) 00:17:15.467 fused_ordering(301) 00:17:15.467 fused_ordering(302) 00:17:15.467 fused_ordering(303) 00:17:15.467 fused_ordering(304) 00:17:15.467 fused_ordering(305) 00:17:15.467 fused_ordering(306) 00:17:15.467 fused_ordering(307) 00:17:15.467 fused_ordering(308) 00:17:15.467 fused_ordering(309) 00:17:15.467 fused_ordering(310) 00:17:15.467 fused_ordering(311) 00:17:15.467 fused_ordering(312) 00:17:15.467 fused_ordering(313) 
00:17:15.467 fused_ordering(314) ... 00:17:16.561 fused_ordering(958) [repetitive fused_ordering(N) log entries for N=314-958 condensed; the run continues below through fused_ordering(1023)]
00:17:16.561 fused_ordering(959) 00:17:16.561 fused_ordering(960) 00:17:16.561 fused_ordering(961) 00:17:16.561 fused_ordering(962) 00:17:16.561 fused_ordering(963) 00:17:16.561 fused_ordering(964) 00:17:16.561 fused_ordering(965) 00:17:16.561 fused_ordering(966) 00:17:16.561 fused_ordering(967) 00:17:16.561 fused_ordering(968) 00:17:16.561 fused_ordering(969) 00:17:16.561 fused_ordering(970) 00:17:16.561 fused_ordering(971) 00:17:16.561 fused_ordering(972) 00:17:16.561 fused_ordering(973) 00:17:16.561 fused_ordering(974) 00:17:16.561 fused_ordering(975) 00:17:16.561 fused_ordering(976) 00:17:16.561 fused_ordering(977) 00:17:16.561 fused_ordering(978) 00:17:16.561 fused_ordering(979) 00:17:16.561 fused_ordering(980) 00:17:16.561 fused_ordering(981) 00:17:16.561 fused_ordering(982) 00:17:16.561 fused_ordering(983) 00:17:16.561 fused_ordering(984) 00:17:16.561 fused_ordering(985) 00:17:16.561 fused_ordering(986) 00:17:16.561 fused_ordering(987) 00:17:16.561 fused_ordering(988) 00:17:16.561 fused_ordering(989) 00:17:16.561 fused_ordering(990) 00:17:16.561 fused_ordering(991) 00:17:16.561 fused_ordering(992) 00:17:16.561 fused_ordering(993) 00:17:16.561 fused_ordering(994) 00:17:16.561 fused_ordering(995) 00:17:16.561 fused_ordering(996) 00:17:16.561 fused_ordering(997) 00:17:16.561 fused_ordering(998) 00:17:16.561 fused_ordering(999) 00:17:16.561 fused_ordering(1000) 00:17:16.561 fused_ordering(1001) 00:17:16.561 fused_ordering(1002) 00:17:16.561 fused_ordering(1003) 00:17:16.561 fused_ordering(1004) 00:17:16.561 fused_ordering(1005) 00:17:16.561 fused_ordering(1006) 00:17:16.561 fused_ordering(1007) 00:17:16.561 fused_ordering(1008) 00:17:16.561 fused_ordering(1009) 00:17:16.561 fused_ordering(1010) 00:17:16.561 fused_ordering(1011) 00:17:16.561 fused_ordering(1012) 00:17:16.561 fused_ordering(1013) 00:17:16.561 fused_ordering(1014) 00:17:16.561 fused_ordering(1015) 00:17:16.561 fused_ordering(1016) 00:17:16.561 fused_ordering(1017) 00:17:16.561 fused_ordering(1018) 00:17:16.561 fused_ordering(1019) 00:17:16.561 fused_ordering(1020) 00:17:16.561 fused_ordering(1021) 00:17:16.561 fused_ordering(1022) 00:17:16.561 fused_ordering(1023) 00:17:16.561 11:44:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:16.561 11:44:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:16.561 11:44:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:16.561 11:44:49 -- nvmf/common.sh@116 -- # sync 00:17:16.561 11:44:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:16.561 11:44:49 -- nvmf/common.sh@119 -- # set +e 00:17:16.561 11:44:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:16.561 11:44:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:16.561 rmmod nvme_tcp 00:17:16.561 rmmod nvme_fabrics 00:17:16.561 rmmod nvme_keyring 00:17:16.561 11:44:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:16.561 11:44:49 -- nvmf/common.sh@123 -- # set -e 00:17:16.561 11:44:49 -- nvmf/common.sh@124 -- # return 0 00:17:16.561 11:44:49 -- nvmf/common.sh@477 -- # '[' -n 70506 ']' 00:17:16.561 11:44:49 -- nvmf/common.sh@478 -- # killprocess 70506 00:17:16.561 11:44:49 -- common/autotest_common.sh@936 -- # '[' -z 70506 ']' 00:17:16.561 11:44:49 -- common/autotest_common.sh@940 -- # kill -0 70506 00:17:16.561 11:44:49 -- common/autotest_common.sh@941 -- # uname 00:17:16.561 11:44:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.561 11:44:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70506 00:17:16.561 11:44:49 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:16.561 11:44:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:16.561 11:44:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70506' 00:17:16.561 killing process with pid 70506 00:17:16.561 11:44:49 -- common/autotest_common.sh@955 -- # kill 70506 00:17:16.561 11:44:49 -- common/autotest_common.sh@960 -- # wait 70506 00:17:16.822 11:44:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:16.822 11:44:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:16.822 11:44:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:16.822 11:44:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.822 11:44:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:16.822 11:44:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.822 11:44:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.822 11:44:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.822 11:44:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:16.822 ************************************ 00:17:16.822 END TEST nvmf_fused_ordering 00:17:16.822 ************************************ 00:17:16.822 00:17:16.822 real 0m3.520s 00:17:16.822 user 0m3.892s 00:17:16.822 sys 0m1.168s 00:17:16.822 11:44:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:16.822 11:44:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.822 11:44:49 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:16.822 11:44:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:16.822 11:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.822 11:44:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.822 ************************************ 00:17:16.822 START TEST nvmf_delete_subsystem 00:17:16.822 ************************************ 00:17:16.822 11:44:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:17.083 * Looking for test storage... 
00:17:17.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:17.083 11:44:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:17.083 11:44:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:17.083 11:44:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:17.083 11:44:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:17.083 11:44:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:17.083 11:44:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:17.083 11:44:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:17.083 11:44:50 -- scripts/common.sh@335 -- # IFS=.-: 00:17:17.083 11:44:50 -- scripts/common.sh@335 -- # read -ra ver1 00:17:17.083 11:44:50 -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.083 11:44:50 -- scripts/common.sh@336 -- # read -ra ver2 00:17:17.083 11:44:50 -- scripts/common.sh@337 -- # local 'op=<' 00:17:17.083 11:44:50 -- scripts/common.sh@339 -- # ver1_l=2 00:17:17.083 11:44:50 -- scripts/common.sh@340 -- # ver2_l=1 00:17:17.083 11:44:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:17.083 11:44:50 -- scripts/common.sh@343 -- # case "$op" in 00:17:17.083 11:44:50 -- scripts/common.sh@344 -- # : 1 00:17:17.083 11:44:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:17.083 11:44:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:17.083 11:44:50 -- scripts/common.sh@364 -- # decimal 1 00:17:17.083 11:44:50 -- scripts/common.sh@352 -- # local d=1 00:17:17.083 11:44:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.083 11:44:50 -- scripts/common.sh@354 -- # echo 1 00:17:17.083 11:44:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:17.083 11:44:50 -- scripts/common.sh@365 -- # decimal 2 00:17:17.083 11:44:50 -- scripts/common.sh@352 -- # local d=2 00:17:17.083 11:44:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.083 11:44:50 -- scripts/common.sh@354 -- # echo 2 00:17:17.083 11:44:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:17.083 11:44:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:17.083 11:44:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:17.083 11:44:50 -- scripts/common.sh@367 -- # return 0 00:17:17.083 11:44:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.083 11:44:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:17.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.083 --rc genhtml_branch_coverage=1 00:17:17.083 --rc genhtml_function_coverage=1 00:17:17.083 --rc genhtml_legend=1 00:17:17.083 --rc geninfo_all_blocks=1 00:17:17.083 --rc geninfo_unexecuted_blocks=1 00:17:17.083 00:17:17.083 ' 00:17:17.083 11:44:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:17.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.083 --rc genhtml_branch_coverage=1 00:17:17.083 --rc genhtml_function_coverage=1 00:17:17.083 --rc genhtml_legend=1 00:17:17.083 --rc geninfo_all_blocks=1 00:17:17.083 --rc geninfo_unexecuted_blocks=1 00:17:17.083 00:17:17.083 ' 00:17:17.083 11:44:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:17.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.083 --rc genhtml_branch_coverage=1 00:17:17.083 --rc genhtml_function_coverage=1 00:17:17.083 --rc genhtml_legend=1 00:17:17.083 --rc geninfo_all_blocks=1 00:17:17.083 --rc geninfo_unexecuted_blocks=1 00:17:17.083 00:17:17.083 ' 00:17:17.083 
11:44:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:17.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.083 --rc genhtml_branch_coverage=1 00:17:17.083 --rc genhtml_function_coverage=1 00:17:17.083 --rc genhtml_legend=1 00:17:17.083 --rc geninfo_all_blocks=1 00:17:17.083 --rc geninfo_unexecuted_blocks=1 00:17:17.083 00:17:17.083 ' 00:17:17.083 11:44:50 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.083 11:44:50 -- nvmf/common.sh@7 -- # uname -s 00:17:17.083 11:44:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.083 11:44:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.083 11:44:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.083 11:44:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.083 11:44:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.083 11:44:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.083 11:44:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.083 11:44:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.083 11:44:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.083 11:44:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.083 11:44:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:17:17.083 11:44:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:17:17.083 11:44:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.083 11:44:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.083 11:44:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.083 11:44:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.083 11:44:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.083 11:44:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.083 11:44:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.083 11:44:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.084 11:44:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.084 11:44:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.084 11:44:50 -- paths/export.sh@5 -- # export PATH 00:17:17.084 11:44:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.084 11:44:50 -- nvmf/common.sh@46 -- # : 0 00:17:17.084 11:44:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:17.084 11:44:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:17.084 11:44:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:17.084 11:44:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.084 11:44:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.084 11:44:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:17.084 11:44:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:17.084 11:44:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:17.084 11:44:50 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:17.084 11:44:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:17.084 11:44:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.084 11:44:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:17.084 11:44:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:17.084 11:44:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:17.084 11:44:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.084 11:44:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.084 11:44:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.084 11:44:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:17.084 11:44:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:17.084 11:44:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:17.084 11:44:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:17.084 11:44:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:17.084 11:44:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:17.084 11:44:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.084 11:44:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.084 11:44:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:17.084 11:44:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:17.084 11:44:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.084 11:44:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.084 11:44:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.084 11:44:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:17.344 11:44:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.344 11:44:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.344 11:44:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.344 11:44:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.344 11:44:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:17.344 11:44:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:17.344 Cannot find device "nvmf_tgt_br" 00:17:17.344 11:44:50 -- nvmf/common.sh@154 -- # true 00:17:17.344 11:44:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.344 Cannot find device "nvmf_tgt_br2" 00:17:17.344 11:44:50 -- nvmf/common.sh@155 -- # true 00:17:17.344 11:44:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:17.344 11:44:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:17.344 Cannot find device "nvmf_tgt_br" 00:17:17.344 11:44:50 -- nvmf/common.sh@157 -- # true 00:17:17.344 11:44:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:17.344 Cannot find device "nvmf_tgt_br2" 00:17:17.344 11:44:50 -- nvmf/common.sh@158 -- # true 00:17:17.344 11:44:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:17.344 11:44:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:17.344 11:44:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.344 11:44:50 -- nvmf/common.sh@161 -- # true 00:17:17.344 11:44:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.344 11:44:50 -- nvmf/common.sh@162 -- # true 00:17:17.344 11:44:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.344 11:44:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.344 11:44:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.344 11:44:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.344 11:44:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.344 11:44:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.344 11:44:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.344 11:44:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.344 11:44:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:17.344 11:44:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:17.344 11:44:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:17.605 11:44:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:17.605 11:44:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:17.605 11:44:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.605 11:44:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.605 11:44:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.605 11:44:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:17.605 11:44:50 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:17.605 11:44:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.605 11:44:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:17.605 11:44:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:17.605 11:44:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:17.605 11:44:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:17.605 11:44:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:17.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:17:17.605 00:17:17.605 --- 10.0.0.2 ping statistics --- 00:17:17.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.605 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:17.605 11:44:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:17.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:17:17.605 00:17:17.605 --- 10.0.0.3 ping statistics --- 00:17:17.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.605 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:17.605 11:44:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:17.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:17:17.605 00:17:17.605 --- 10.0.0.1 ping statistics --- 00:17:17.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.605 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:17.605 11:44:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.605 11:44:50 -- nvmf/common.sh@421 -- # return 0 00:17:17.605 11:44:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:17.605 11:44:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.605 11:44:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:17.605 11:44:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:17.605 11:44:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.605 11:44:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:17.605 11:44:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:17.605 11:44:50 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:17.605 11:44:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:17.605 11:44:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.605 11:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.605 11:44:50 -- nvmf/common.sh@469 -- # nvmfpid=70744 00:17:17.605 11:44:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:17.605 11:44:50 -- nvmf/common.sh@470 -- # waitforlisten 70744 00:17:17.605 11:44:50 -- common/autotest_common.sh@829 -- # '[' -z 70744 ']' 00:17:17.605 11:44:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.605 11:44:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.605 11:44:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
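The nvmf_tgt instance being waited on here runs inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init assembled just above. Condensed into a sketch (same interface names and addresses as in the trace; the link-up steps and the bridge FORWARD rule are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk                                 # target gets its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target-side veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target-side veth pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                               # host-side peers share one bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # connectivity check before the test
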
00:17:17.605 11:44:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.605 11:44:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.605 [2024-11-20 11:44:50.570325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:17.605 [2024-11-20 11:44:50.570395] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.866 [2024-11-20 11:44:50.707743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:17.866 [2024-11-20 11:44:50.807746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:17.866 [2024-11-20 11:44:50.807874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.866 [2024-11-20 11:44:50.807881] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.866 [2024-11-20 11:44:50.807886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.866 [2024-11-20 11:44:50.808010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.866 [2024-11-20 11:44:50.808010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.437 11:44:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.437 11:44:51 -- common/autotest_common.sh@862 -- # return 0 00:17:18.437 11:44:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:18.437 11:44:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.437 11:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.697 11:44:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.697 11:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.697 11:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.697 [2024-11-20 11:44:51.496616] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.697 11:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:18.697 11:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.697 11:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.697 11:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.697 11:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.697 11:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.697 [2024-11-20 11:44:51.520685] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.697 11:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:18.697 11:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.697 11:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.697 NULL1 00:17:18.697 11:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.697 11:44:51 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:18.697 11:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.697 11:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.697 Delay0 00:17:18.697 11:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:18.697 11:44:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.697 11:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.697 11:44:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@28 -- # perf_pid=70801 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:18.697 11:44:51 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:18.697 [2024-11-20 11:44:51.736782] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:20.621 11:44:53 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.621 11:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.621 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed 
with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 [2024-11-20 11:44:53.764595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575950 is same with the state(5) to be set 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 
Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 [2024-11-20 11:44:53.765602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x574a80 is same with the state(5) to be set 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Write completed with error (sct=0, sc=8) 00:17:20.881 starting I/O failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 starting I/O 
failed: -6 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.881 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O 
failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 Write completed with error (sct=0, sc=8) 00:17:20.882 Read completed with error (sct=0, sc=8) 00:17:20.882 starting I/O failed: -6 00:17:20.882 starting I/O failed: -6 00:17:20.882 starting I/O failed: -6 00:17:20.882 starting I/O failed: -6 00:17:20.882 starting I/O failed: -6 00:17:21.820 [2024-11-20 11:44:54.749027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5765a0 is same with the state(5) to be set 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 [2024-11-20 11:44:54.763547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5747d0 is same with the state(5) to be set 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Write completed 
with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 [2024-11-20 11:44:54.764395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x574d30 is same with the state(5) to be set 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Write completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.820 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 [2024-11-20 11:44:54.765579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f706400c600 is same with the state(5) to be set 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error 
(sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Write completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 Read completed with error (sct=0, sc=8) 00:17:21.821 [2024-11-20 11:44:54.765781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f706400bf20 is same with the state(5) to be set 00:17:21.821 [2024-11-20 11:44:54.766423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5765a0 (9): Bad file descriptor 00:17:21.821 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:21.821 11:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.821 11:44:54 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:21.821 11:44:54 -- target/delete_subsystem.sh@35 -- # kill -0 70801 00:17:21.821 11:44:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:21.821 Initializing NVMe Controllers 00:17:21.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:21.821 Controller IO queue size 128, less than required. 00:17:21.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:21.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:21.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:21.821 Initialization complete. Launching workers. 
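The failed completions above are the point of this test case: delete_subsystem.sh starts spdk_nvme_perf against nqn.2016-06.io.spdk:cnode1 and then deletes the subsystem while I/O is still in flight, so the outstanding requests finish with an error status (sc=8) and perf exits reporting errors. The script then simply polls the perf PID until it is gone; the latency summary that follows is the partial result of that interrupted run. A minimal sketch of the wait loop, reconstructed from the xtrace lines above (variable names are illustrative, not the script's exact source):

  perf_pid=70801   # spdk_nvme_perf process started earlier by the test
  delay=0
  # kill -0 only checks that the PID still exists; it sends no signal.
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; break; }
      sleep 0.5
  done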
00:17:21.821 ======================================================== 00:17:21.821 Latency(us) 00:17:21.821 Device Information : IOPS MiB/s Average min max 00:17:21.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.61 0.09 879914.14 1013.42 1008491.01 00:17:21.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.08 0.09 938474.00 323.69 2002548.37 00:17:21.821 ======================================================== 00:17:21.821 Total : 358.68 0.18 909640.78 323.69 2002548.37 00:17:21.821 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@35 -- # kill -0 70801 00:17:22.390 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70801) - No such process 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@45 -- # NOT wait 70801 00:17:22.390 11:44:55 -- common/autotest_common.sh@650 -- # local es=0 00:17:22.390 11:44:55 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 70801 00:17:22.390 11:44:55 -- common/autotest_common.sh@638 -- # local arg=wait 00:17:22.390 11:44:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.390 11:44:55 -- common/autotest_common.sh@642 -- # type -t wait 00:17:22.390 11:44:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.390 11:44:55 -- common/autotest_common.sh@653 -- # wait 70801 00:17:22.390 11:44:55 -- common/autotest_common.sh@653 -- # es=1 00:17:22.390 11:44:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.390 11:44:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.390 11:44:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:22.390 11:44:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.390 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.390 11:44:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.390 11:44:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.390 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.390 [2024-11-20 11:44:55.303061] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.390 11:44:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.390 11:44:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.390 11:44:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.390 11:44:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@54 -- # perf_pid=70848 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:22.390 11:44:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:22.649 [2024-11-20 11:44:55.493257] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:22.908 11:44:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:22.908 11:44:55 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:22.908 11:44:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:23.478 11:44:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:23.478 11:44:56 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:23.478 11:44:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:24.047 11:44:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:24.047 11:44:56 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:24.047 11:44:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:24.307 11:44:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:24.307 11:44:57 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:24.307 11:44:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:24.877 11:44:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:24.877 11:44:57 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:24.877 11:44:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:25.446 11:44:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:25.446 11:44:58 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:25.446 11:44:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:25.706 Initializing NVMe Controllers 00:17:25.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.706 Controller IO queue size 128, less than required. 00:17:25.706 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:25.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:25.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:25.706 Initialization complete. Launching workers. 
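In this second pass the subsystem stays up for the whole three-second workload, so the poll loop above simply repeats every half second until perf exits; the summary below shows results from both cores. The perf invocation from the trace, re-stated with the common flags annotated (the annotations describe standard spdk_nvme_perf options; -P is left as an assumption, most likely the number of I/O queue pairs per namespace):

  # -c 0xC   core mask (workers on cores 2 and 3)
  # -r ...   transport ID of the target to connect to
  # -t 3     run time in seconds
  # -q 128   queue depth
  # -w/-M    random mixed workload, 70% reads
  # -o 512   I/O size in bytes
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4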
00:17:25.706 ======================================================== 00:17:25.706 Latency(us) 00:17:25.706 Device Information : IOPS MiB/s Average min max 00:17:25.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002397.19 1000095.30 1041109.75 00:17:25.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003830.61 1000121.06 1041808.66 00:17:25.706 ======================================================== 00:17:25.706 Total : 256.00 0.12 1003113.90 1000095.30 1041808.66 00:17:25.706 00:17:25.965 11:44:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:25.965 11:44:58 -- target/delete_subsystem.sh@57 -- # kill -0 70848 00:17:25.965 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70848) - No such process 00:17:25.965 11:44:58 -- target/delete_subsystem.sh@67 -- # wait 70848 00:17:25.965 11:44:58 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:25.965 11:44:58 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:25.965 11:44:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:25.965 11:44:58 -- nvmf/common.sh@116 -- # sync 00:17:25.965 11:44:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:25.965 11:44:58 -- nvmf/common.sh@119 -- # set +e 00:17:25.965 11:44:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:25.965 11:44:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:25.965 rmmod nvme_tcp 00:17:25.965 rmmod nvme_fabrics 00:17:25.965 rmmod nvme_keyring 00:17:25.965 11:44:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:25.965 11:44:58 -- nvmf/common.sh@123 -- # set -e 00:17:25.965 11:44:58 -- nvmf/common.sh@124 -- # return 0 00:17:25.965 11:44:58 -- nvmf/common.sh@477 -- # '[' -n 70744 ']' 00:17:25.965 11:44:58 -- nvmf/common.sh@478 -- # killprocess 70744 00:17:25.965 11:44:58 -- common/autotest_common.sh@936 -- # '[' -z 70744 ']' 00:17:25.965 11:44:58 -- common/autotest_common.sh@940 -- # kill -0 70744 00:17:25.965 11:44:58 -- common/autotest_common.sh@941 -- # uname 00:17:25.965 11:44:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:25.965 11:44:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70744 00:17:26.224 11:44:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:26.224 11:44:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:26.224 11:44:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70744' 00:17:26.224 killing process with pid 70744 00:17:26.225 11:44:59 -- common/autotest_common.sh@955 -- # kill 70744 00:17:26.225 11:44:59 -- common/autotest_common.sh@960 -- # wait 70744 00:17:26.225 11:44:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:26.225 11:44:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:26.225 11:44:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:26.225 11:44:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.225 11:44:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:26.225 11:44:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.225 11:44:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.225 11:44:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.484 11:44:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:26.484 00:17:26.484 real 0m9.452s 00:17:26.484 user 0m28.982s 00:17:26.484 sys 0m1.394s 00:17:26.484 11:44:59 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:17:26.484 11:44:59 -- common/autotest_common.sh@10 -- # set +x 00:17:26.484 ************************************ 00:17:26.484 END TEST nvmf_delete_subsystem 00:17:26.484 ************************************ 00:17:26.484 11:44:59 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:17:26.484 11:44:59 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:17:26.484 11:44:59 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:26.484 11:44:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:26.484 11:44:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.484 11:44:59 -- common/autotest_common.sh@10 -- # set +x 00:17:26.484 ************************************ 00:17:26.484 START TEST nvmf_vfio_user 00:17:26.484 ************************************ 00:17:26.484 11:44:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:26.484 * Looking for test storage... 00:17:26.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:26.484 11:44:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:26.484 11:44:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:26.484 11:44:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:26.744 11:44:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:26.744 11:44:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:26.744 11:44:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:26.744 11:44:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:26.744 11:44:59 -- scripts/common.sh@335 -- # IFS=.-: 00:17:26.744 11:44:59 -- scripts/common.sh@335 -- # read -ra ver1 00:17:26.744 11:44:59 -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.744 11:44:59 -- scripts/common.sh@336 -- # read -ra ver2 00:17:26.744 11:44:59 -- scripts/common.sh@337 -- # local 'op=<' 00:17:26.744 11:44:59 -- scripts/common.sh@339 -- # ver1_l=2 00:17:26.744 11:44:59 -- scripts/common.sh@340 -- # ver2_l=1 00:17:26.744 11:44:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:26.744 11:44:59 -- scripts/common.sh@343 -- # case "$op" in 00:17:26.744 11:44:59 -- scripts/common.sh@344 -- # : 1 00:17:26.744 11:44:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:26.744 11:44:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.744 11:44:59 -- scripts/common.sh@364 -- # decimal 1 00:17:26.744 11:44:59 -- scripts/common.sh@352 -- # local d=1 00:17:26.744 11:44:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.744 11:44:59 -- scripts/common.sh@354 -- # echo 1 00:17:26.744 11:44:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:26.744 11:44:59 -- scripts/common.sh@365 -- # decimal 2 00:17:26.744 11:44:59 -- scripts/common.sh@352 -- # local d=2 00:17:26.744 11:44:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.744 11:44:59 -- scripts/common.sh@354 -- # echo 2 00:17:26.744 11:44:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:26.744 11:44:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.744 11:44:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:26.744 11:44:59 -- scripts/common.sh@367 -- # return 0 00:17:26.744 11:44:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.744 11:44:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.744 --rc genhtml_branch_coverage=1 00:17:26.744 --rc genhtml_function_coverage=1 00:17:26.744 --rc genhtml_legend=1 00:17:26.744 --rc geninfo_all_blocks=1 00:17:26.744 --rc geninfo_unexecuted_blocks=1 00:17:26.744 00:17:26.744 ' 00:17:26.744 11:44:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.744 --rc genhtml_branch_coverage=1 00:17:26.744 --rc genhtml_function_coverage=1 00:17:26.744 --rc genhtml_legend=1 00:17:26.744 --rc geninfo_all_blocks=1 00:17:26.744 --rc geninfo_unexecuted_blocks=1 00:17:26.744 00:17:26.744 ' 00:17:26.744 11:44:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.744 --rc genhtml_branch_coverage=1 00:17:26.744 --rc genhtml_function_coverage=1 00:17:26.744 --rc genhtml_legend=1 00:17:26.744 --rc geninfo_all_blocks=1 00:17:26.744 --rc geninfo_unexecuted_blocks=1 00:17:26.744 00:17:26.744 ' 00:17:26.744 11:44:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.744 --rc genhtml_branch_coverage=1 00:17:26.744 --rc genhtml_function_coverage=1 00:17:26.744 --rc genhtml_legend=1 00:17:26.744 --rc geninfo_all_blocks=1 00:17:26.744 --rc geninfo_unexecuted_blocks=1 00:17:26.744 00:17:26.744 ' 00:17:26.744 11:44:59 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.744 11:44:59 -- nvmf/common.sh@7 -- # uname -s 00:17:26.744 11:44:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.744 11:44:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.744 11:44:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.744 11:44:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.744 11:44:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.744 11:44:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.744 11:44:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.744 11:44:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.744 11:44:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.744 11:44:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.744 11:44:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
00:17:26.744 11:44:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:17:26.744 11:44:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.744 11:44:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.744 11:44:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.744 11:44:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.744 11:44:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.744 11:44:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.744 11:44:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.744 11:44:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.744 11:44:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.744 11:44:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.744 11:44:59 -- paths/export.sh@5 -- # export PATH 00:17:26.745 11:44:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.745 11:44:59 -- nvmf/common.sh@46 -- # : 0 00:17:26.745 11:44:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:26.745 11:44:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:26.745 11:44:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:26.745 11:44:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.745 11:44:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.745 11:44:59 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:26.745 11:44:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:26.745 11:44:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70983 00:17:26.745 Process pid: 70983 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70983' 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:26.745 11:44:59 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70983 00:17:26.745 11:44:59 -- common/autotest_common.sh@829 -- # '[' -z 70983 ']' 00:17:26.745 11:44:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.745 11:44:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.745 11:44:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.745 11:44:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.745 11:44:59 -- common/autotest_common.sh@10 -- # set +x 00:17:26.745 [2024-11-20 11:44:59.677737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:26.745 [2024-11-20 11:44:59.677825] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.005 [2024-11-20 11:44:59.814710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.005 [2024-11-20 11:44:59.913856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:27.005 [2024-11-20 11:44:59.913996] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.005 [2024-11-20 11:44:59.914003] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.005 [2024-11-20 11:44:59.914008] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
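For the vfio-user test the target is launched with every tracepoint group enabled (-e 0xFFFF) across four cores (-m '[0,1,2,3]'), and the harness blocks in waitforlisten until the process answers on its RPC socket. A rough equivalent of that wait, using the rpc_get_methods RPC (a sketch of the idea, not the helper's actual implementation):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll until nvmf_tgt (pid 70983 in this run) accepts RPCs on /var/tmp/spdk.sock.
  until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 70983 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
  done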
00:17:27.005 [2024-11-20 11:44:59.914268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.005 [2024-11-20 11:44:59.914561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.005 [2024-11-20 11:44:59.914464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.005 [2024-11-20 11:44:59.914565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.571 11:45:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.571 11:45:00 -- common/autotest_common.sh@862 -- # return 0 00:17:27.571 11:45:00 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:28.509 11:45:01 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:28.769 11:45:01 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:28.769 11:45:01 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:28.769 11:45:01 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:28.769 11:45:01 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:28.769 11:45:01 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:29.028 Malloc1 00:17:29.028 11:45:01 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:29.287 11:45:02 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:29.546 11:45:02 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:29.805 11:45:02 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:29.805 11:45:02 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:29.805 11:45:02 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:30.064 Malloc2 00:17:30.064 11:45:02 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:30.323 11:45:03 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:30.323 11:45:03 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:30.582 11:45:03 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:30.582 11:45:03 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:30.582 11:45:03 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:30.582 11:45:03 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:30.582 11:45:03 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:30.582 11:45:03 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:30.582 [2024-11-20 11:45:03.588956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
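Condensed from the xtrace above: the setup creates one VFIOUSER transport and then, for each of the two emulated devices, a malloc bdev, a subsystem, a namespace, and a listener rooted under /var/run/vfio-user (names and paths exactly as the test uses them):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"      # 64 MB bdev with 512-byte blocks
      $rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      $rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done

spdk_nvme_identify then connects with trtype:VFIOUSER and traddr pointing at that directory; the debug trace that follows shows it mapping the emulated controller's BARs and walking the usual CC.EN / CSTS.RDY enable handshake before issuing the Identify commands whose output is printed further down.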
00:17:30.582 [2024-11-20 11:45:03.588993] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71116 ] 00:17:30.844 [2024-11-20 11:45:03.719458] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:30.844 [2024-11-20 11:45:03.721864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:30.844 [2024-11-20 11:45:03.721889] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0d5d6ed000 00:17:30.844 [2024-11-20 11:45:03.722857] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.723850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.724847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.725846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.726858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.727884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.729683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.729858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.844 [2024-11-20 11:45:03.730865] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:30.844 [2024-11-20 11:45:03.730883] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0d5ce46000 00:17:30.844 [2024-11-20 11:45:03.731923] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:30.844 [2024-11-20 11:45:03.746095] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:30.844 [2024-11-20 11:45:03.746134] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:30.844 [2024-11-20 11:45:03.750928] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:30.844 [2024-11-20 11:45:03.750986] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:30.844 [2024-11-20 11:45:03.751077] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:30.844 [2024-11-20 
11:45:03.751101] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:30.844 [2024-11-20 11:45:03.751107] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:30.844 [2024-11-20 11:45:03.751917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:30.844 [2024-11-20 11:45:03.751930] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:30.844 [2024-11-20 11:45:03.751937] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:30.844 [2024-11-20 11:45:03.752914] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:30.844 [2024-11-20 11:45:03.752923] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:30.844 [2024-11-20 11:45:03.752929] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:30.844 [2024-11-20 11:45:03.753911] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:30.844 [2024-11-20 11:45:03.753920] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:30.844 [2024-11-20 11:45:03.754921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:30.844 [2024-11-20 11:45:03.754933] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:30.844 [2024-11-20 11:45:03.754937] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:30.844 [2024-11-20 11:45:03.754943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:30.844 [2024-11-20 11:45:03.755048] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:30.844 [2024-11-20 11:45:03.755055] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:30.844 [2024-11-20 11:45:03.755059] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:30.844 [2024-11-20 11:45:03.755932] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:30.844 [2024-11-20 11:45:03.756927] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:30.844 [2024-11-20 11:45:03.757921] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:17:30.844 [2024-11-20 11:45:03.758961] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:30.844 [2024-11-20 11:45:03.759939] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:30.844 [2024-11-20 11:45:03.759951] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:30.844 [2024-11-20 11:45:03.759956] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:30.844 [2024-11-20 11:45:03.759975] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:30.844 [2024-11-20 11:45:03.759989] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:30.844 [2024-11-20 11:45:03.760013] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:30.844 [2024-11-20 11:45:03.760021] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.845 [2024-11-20 11:45:03.760035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760122] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:30.845 [2024-11-20 11:45:03.760126] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:30.845 [2024-11-20 11:45:03.760130] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:30.845 [2024-11-20 11:45:03.760133] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:30.845 [2024-11-20 11:45:03.760137] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:30.845 [2024-11-20 11:45:03.760141] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:30.845 [2024-11-20 11:45:03.760145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760228] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.845 [2024-11-20 11:45:03.760236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.845 [2024-11-20 11:45:03.760243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.845 [2024-11-20 11:45:03.760250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.845 [2024-11-20 11:45:03.760254] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760265] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760322] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:30.845 [2024-11-20 11:45:03.760326] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760332] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760340] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760441] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760451] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760458] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:30.845 [2024-11-20 11:45:03.760462] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:30.845 [2024-11-20 11:45:03.760468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 
11:45:03.760530] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:30.845 [2024-11-20 11:45:03.760538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760545] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760551] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:30.845 [2024-11-20 11:45:03.760554] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.845 [2024-11-20 11:45:03.760560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760623] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760630] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760636] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:30.845 [2024-11-20 11:45:03.760639] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.845 [2024-11-20 11:45:03.760645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760713] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760719] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760726] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760740] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:30.845 [2024-11-20 11:45:03.760744] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:30.845 [2024-11-20 11:45:03.760748] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:30.845 [2024-11-20 11:45:03.760766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.760960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.760971] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:30.845 [2024-11-20 11:45:03.760974] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:30.845 [2024-11-20 11:45:03.760977] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:30.845 [2024-11-20 11:45:03.760979] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:30.845 [2024-11-20 11:45:03.760984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:30.845 [2024-11-20 11:45:03.760989] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:30.845 [2024-11-20 11:45:03.760992] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:30.845 [2024-11-20 11:45:03.760997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.761002] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:30.845 [2024-11-20 11:45:03.761005] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.845 [2024-11-20 11:45:03.761009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.761015] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:30.845 [2024-11-20 11:45:03.761018] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:30.845 [2024-11-20 11:45:03.761023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:30.845 [2024-11-20 11:45:03.761057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.761089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.761115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:30.845 [2024-11-20 11:45:03.761143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:30.845 ===================================================== 00:17:30.845 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:30.845 ===================================================== 00:17:30.845 Controller Capabilities/Features 00:17:30.845 ================================ 00:17:30.845 Vendor ID: 4e58 00:17:30.845 Subsystem Vendor ID: 4e58 00:17:30.845 Serial Number: SPDK1 00:17:30.846 Model Number: SPDK bdev Controller 00:17:30.846 Firmware Version: 24.01.1 00:17:30.846 Recommended Arb Burst: 6 00:17:30.846 IEEE OUI Identifier: 8d 6b 50 00:17:30.846 Multi-path I/O 00:17:30.846 May have multiple subsystem ports: Yes 00:17:30.846 May have multiple controllers: Yes 00:17:30.846 Associated with SR-IOV VF: No 00:17:30.846 Max Data Transfer Size: 131072 00:17:30.846 Max Number of Namespaces: 32 00:17:30.846 Max Number of I/O Queues: 127 00:17:30.846 NVMe Specification Version (VS): 1.3 00:17:30.846 NVMe Specification Version (Identify): 1.3 00:17:30.846 Maximum Queue Entries: 256 00:17:30.846 Contiguous Queues Required: Yes 00:17:30.846 Arbitration Mechanisms Supported 00:17:30.846 Weighted Round Robin: Not Supported 00:17:30.846 Vendor Specific: Not Supported 00:17:30.846 Reset Timeout: 15000 ms 00:17:30.846 Doorbell Stride: 4 bytes 00:17:30.846 NVM Subsystem Reset: Not Supported 00:17:30.846 Command Sets Supported 00:17:30.846 NVM Command Set: Supported 00:17:30.846 Boot Partition: Not Supported 00:17:30.846 Memory Page Size Minimum: 4096 bytes 00:17:30.846 Memory Page Size Maximum: 4096 bytes 00:17:30.846 Persistent Memory Region: Not Supported 00:17:30.846 Optional Asynchronous Events Supported 00:17:30.846 Namespace Attribute Notices: Supported 00:17:30.846 Firmware Activation Notices: Not Supported 00:17:30.846 ANA Change Notices: Not Supported 00:17:30.846 PLE Aggregate Log Change Notices: Not Supported 00:17:30.846 LBA Status Info Alert Notices: Not Supported 00:17:30.846 EGE Aggregate Log Change Notices: Not Supported 00:17:30.846 Normal NVM Subsystem Shutdown event: Not Supported 00:17:30.846 Zone Descriptor Change Notices: Not Supported 00:17:30.846 Discovery Log Change Notices: Not Supported 00:17:30.846 Controller Attributes 00:17:30.846 128-bit Host Identifier: Supported 00:17:30.846 Non-Operational Permissive Mode: Not Supported 00:17:30.846 NVM Sets: Not Supported 00:17:30.846 Read Recovery Levels: Not Supported 00:17:30.846 Endurance Groups: Not Supported 00:17:30.846 Predictable Latency Mode: Not Supported 00:17:30.846 Traffic Based Keep ALive: Not Supported 00:17:30.846 Namespace Granularity: Not Supported 00:17:30.846 SQ Associations: Not Supported 00:17:30.846 UUID List: Not Supported 00:17:30.846 Multi-Domain Subsystem: Not Supported 00:17:30.846 Fixed Capacity Management: Not Supported 00:17:30.846 
Variable Capacity Management: Not Supported 00:17:30.846 Delete Endurance Group: Not Supported 00:17:30.846 Delete NVM Set: Not Supported 00:17:30.846 Extended LBA Formats Supported: Not Supported 00:17:30.846 Flexible Data Placement Supported: Not Supported 00:17:30.846 00:17:30.846 Controller Memory Buffer Support 00:17:30.846 ================================ 00:17:30.846 Supported: No 00:17:30.846 00:17:30.846 Persistent Memory Region Support 00:17:30.846 ================================ 00:17:30.846 Supported: No 00:17:30.846 00:17:30.846 Admin Command Set Attributes 00:17:30.846 ============================ 00:17:30.846 Security Send/Receive: Not Supported 00:17:30.846 Format NVM: Not Supported 00:17:30.846 Firmware Activate/Download: Not Supported 00:17:30.846 Namespace Management: Not Supported 00:17:30.846 Device Self-Test: Not Supported 00:17:30.846 Directives: Not Supported 00:17:30.846 NVMe-MI: Not Supported 00:17:30.846 Virtualization Management: Not Supported 00:17:30.846 Doorbell Buffer Config: Not Supported 00:17:30.846 Get LBA Status Capability: Not Supported 00:17:30.846 Command & Feature Lockdown Capability: Not Supported 00:17:30.846 Abort Command Limit: 4 00:17:30.846 Async Event Request Limit: 4 00:17:30.846 Number of Firmware Slots: N/A 00:17:30.846 Firmware Slot 1 Read-Only: N/A 00:17:30.846 Firmware Activation Without Reset: N/A 00:17:30.846 Multiple Update Detection Support: N/A 00:17:30.846 Firmware Update Granularity: No Information Provided 00:17:30.846 Per-Namespace SMART Log: No 00:17:30.846 Asymmetric Namespace Access Log Page: Not Supported 00:17:30.846 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:30.846 Command Effects Log Page: Supported 00:17:30.846 Get Log Page Extended Data: Supported 00:17:30.846 Telemetry Log Pages: Not Supported 00:17:30.846 Persistent Event Log Pages: Not Supported 00:17:30.846 Supported Log Pages Log Page: May Support 00:17:30.846 Commands Supported & Effects Log Page: Not Supported 00:17:30.846 Feature Identifiers & Effects Log Page:May Support 00:17:30.846 NVMe-MI Commands & Effects Log Page: May Support 00:17:30.846 Data Area 4 for Telemetry Log: Not Supported 00:17:30.846 Error Log Page Entries Supported: 128 00:17:30.846 Keep Alive: Supported 00:17:30.846 Keep Alive Granularity: 10000 ms 00:17:30.846 00:17:30.846 NVM Command Set Attributes 00:17:30.846 ========================== 00:17:30.846 Submission Queue Entry Size 00:17:30.846 Max: 64 00:17:30.846 Min: 64 00:17:30.846 Completion Queue Entry Size 00:17:30.846 Max: 16 00:17:30.846 Min: 16 00:17:30.846 Number of Namespaces: 32 00:17:30.846 Compare Command: Supported 00:17:30.846 Write Uncorrectable Command: Not Supported 00:17:30.846 Dataset Management Command: Supported 00:17:30.846 Write Zeroes Command: Supported 00:17:30.846 Set Features Save Field: Not Supported 00:17:30.846 Reservations: Not Supported 00:17:30.846 Timestamp: Not Supported 00:17:30.846 Copy: Supported 00:17:30.846 Volatile Write Cache: Present 00:17:30.846 Atomic Write Unit (Normal): 1 00:17:30.846 Atomic Write Unit (PFail): 1 00:17:30.846 Atomic Compare & Write Unit: 1 00:17:30.846 Fused Compare & Write: Supported 00:17:30.846 Scatter-Gather List 00:17:30.846 SGL Command Set: Supported (Dword aligned) 00:17:30.846 SGL Keyed: Not Supported 00:17:30.846 SGL Bit Bucket Descriptor: Not Supported 00:17:30.846 SGL Metadata Pointer: Not Supported 00:17:30.846 Oversized SGL: Not Supported 00:17:30.846 SGL Metadata Address: Not Supported 00:17:30.846 SGL Offset: Not Supported 00:17:30.846 Transport SGL Data 
Block: Not Supported 00:17:30.846 Replay Protected Memory Block: Not Supported 00:17:30.846 00:17:30.846 Firmware Slot Information 00:17:30.846 ========================= 00:17:30.846 Active slot: 1 00:17:30.846 Slot 1 Firmware Revision: 24.01.1 00:17:30.846 00:17:30.846 00:17:30.846 Commands Supported and Effects 00:17:30.846 ============================== 00:17:30.846 Admin Commands 00:17:30.846 -------------- 00:17:30.846 Get Log Page (02h): Supported 00:17:30.846 Identify (06h): Supported 00:17:30.846 Abort (08h): Supported 00:17:30.846 Set Features (09h): Supported 00:17:30.846 Get Features (0Ah): Supported 00:17:30.846 Asynchronous Event Request (0Ch): Supported 00:17:30.846 Keep Alive (18h): Supported 00:17:30.846 I/O Commands 00:17:30.846 ------------ 00:17:30.846 Flush (00h): Supported LBA-Change 00:17:30.846 Write (01h): Supported LBA-Change 00:17:30.846 Read (02h): Supported 00:17:30.846 Compare (05h): Supported 00:17:30.846 Write Zeroes (08h): Supported LBA-Change 00:17:30.846 Dataset Management (09h): Supported LBA-Change 00:17:30.846 Copy (19h): Supported LBA-Change 00:17:30.846 Unknown (79h): Supported LBA-Change 00:17:30.846 Unknown (7Ah): Supported 00:17:30.846 00:17:30.846 Error Log 00:17:30.846 ========= 00:17:30.846 00:17:30.846 Arbitration 00:17:30.846 =========== 00:17:30.846 Arbitration Burst: 1 00:17:30.846 00:17:30.846 Power Management 00:17:30.846 ================ 00:17:30.846 Number of Power States: 1 00:17:30.846 Current Power State: Power State #0 00:17:30.846 Power State #0: 00:17:30.846 Max Power: 0.00 W 00:17:30.846 Non-Operational State: Operational 00:17:30.846 Entry Latency: Not Reported 00:17:30.846 Exit Latency: Not Reported 00:17:30.846 Relative Read Throughput: 0 00:17:30.846 Relative Read Latency: 0 00:17:30.846 Relative Write Throughput: 0 00:17:30.846 Relative Write Latency: 0 00:17:30.846 Idle Power: Not Reported 00:17:30.846 Active Power: Not Reported 00:17:30.846 Non-Operational Permissive Mode: Not Supported 00:17:30.846 00:17:30.846 Health Information 00:17:30.846 ================== 00:17:30.846 Critical Warnings: 00:17:30.846 Available Spare Space: OK 00:17:30.846 Temperature: OK 00:17:30.846 Device Reliability: OK 00:17:30.846 Read Only: No 00:17:30.846 Volatile Memory Backup: OK 00:17:30.846 Current Temperature: 0 Kelvin[2024-11-20 11:45:03.761249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:30.846 [2024-11-20 11:45:03.761279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:30.846 [2024-11-20 11:45:03.761308] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:30.846 [2024-11-20 11:45:03.761318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.847 [2024-11-20 11:45:03.761322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.847 [2024-11-20 11:45:03.761327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.847 [2024-11-20 11:45:03.761331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.847 [2024-11-20 11:45:03.765667] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:30.847 [2024-11-20 11:45:03.765686] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:30.847 [2024-11-20 11:45:03.766001] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:30.847 [2024-11-20 11:45:03.766010] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:30.847 [2024-11-20 11:45:03.766958] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:30.847 [2024-11-20 11:45:03.766973] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:30.847 [2024-11-20 11:45:03.767092] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:30.847 [2024-11-20 11:45:03.769009] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:30.847 (-273 Celsius) 00:17:30.847 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:30.847 Available Spare: 0% 00:17:30.847 Available Spare Threshold: 0% 00:17:30.847 Life Percentage Used: 0% 00:17:30.847 Data Units Read: 0 00:17:30.847 Data Units Written: 0 00:17:30.847 Host Read Commands: 0 00:17:30.847 Host Write Commands: 0 00:17:30.847 Controller Busy Time: 0 minutes 00:17:30.847 Power Cycles: 0 00:17:30.847 Power On Hours: 0 hours 00:17:30.847 Unsafe Shutdowns: 0 00:17:30.847 Unrecoverable Media Errors: 0 00:17:30.847 Lifetime Error Log Entries: 0 00:17:30.847 Warning Temperature Time: 0 minutes 00:17:30.847 Critical Temperature Time: 0 minutes 00:17:30.847 00:17:30.847 Number of Queues 00:17:30.847 ================ 00:17:30.847 Number of I/O Submission Queues: 127 00:17:30.847 Number of I/O Completion Queues: 127 00:17:30.847 00:17:30.847 Active Namespaces 00:17:30.847 ================= 00:17:30.847 Namespace ID:1 00:17:30.847 Error Recovery Timeout: Unlimited 00:17:30.847 Command Set Identifier: NVM (00h) 00:17:30.847 Deallocate: Supported 00:17:30.847 Deallocated/Unwritten Error: Not Supported 00:17:30.847 Deallocated Read Value: Unknown 00:17:30.847 Deallocate in Write Zeroes: Not Supported 00:17:30.847 Deallocated Guard Field: 0xFFFF 00:17:30.847 Flush: Supported 00:17:30.847 Reservation: Supported 00:17:30.847 Namespace Sharing Capabilities: Multiple Controllers 00:17:30.847 Size (in LBAs): 131072 (0GiB) 00:17:30.847 Capacity (in LBAs): 131072 (0GiB) 00:17:30.847 Utilization (in LBAs): 131072 (0GiB) 00:17:30.847 NGUID: CEE245CF5C954669BC92CBD49658F3DD 00:17:30.847 UUID: cee245cf-5c95-4669-bc92-cbd49658f3dd 00:17:30.847 Thin Provisioning: Not Supported 00:17:30.847 Per-NS Atomic Units: Yes 00:17:30.847 Atomic Boundary Size (Normal): 0 00:17:30.847 Atomic Boundary Size (PFail): 0 00:17:30.847 Atomic Boundary Offset: 0 00:17:30.847 Maximum Single Source Range Length: 65535 00:17:30.847 Maximum Copy Length: 65535 00:17:30.847 Maximum Source Range Count: 1 00:17:30.847 NGUID/EUI64 Never Reused: No 00:17:30.847 Namespace Write Protected: No 00:17:30.847 Number of LBA Formats: 1 00:17:30.847 Current LBA Format: LBA Format #00 00:17:30.847 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:30.847 00:17:30.847 
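For reference, the performance steps that follow drive the vfio-user controller created above with SPDK's spdk_nvme_perf example. A minimal standalone sketch of the same invocation, using only the transport string, queue depth, I/O size, workload, and core mask recorded in the run below (the binary path and socket path are specific to this CI workspace), would be:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The write-workload step that follows is the same command with -w write; all other parameters are unchanged.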
11:45:03 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:36.126 Initializing NVMe Controllers 00:17:36.126 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:36.126 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:36.126 Initialization complete. Launching workers. 00:17:36.126 ======================================================== 00:17:36.126 Latency(us) 00:17:36.126 Device Information : IOPS MiB/s Average min max 00:17:36.126 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 38259.03 149.45 3345.03 927.44 10818.71 00:17:36.126 ======================================================== 00:17:36.126 Total : 38259.03 149.45 3345.03 927.44 10818.71 00:17:36.126 00:17:36.126 11:45:09 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:41.399 Initializing NVMe Controllers 00:17:41.399 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:41.399 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:41.399 Initialization complete. Launching workers. 00:17:41.399 ======================================================== 00:17:41.399 Latency(us) 00:17:41.399 Device Information : IOPS MiB/s Average min max 00:17:41.399 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15824.38 61.81 8094.25 5006.02 16153.14 00:17:41.399 ======================================================== 00:17:41.399 Total : 15824.38 61.81 8094.25 5006.02 16153.14 00:17:41.399 00:17:41.399 11:45:14 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:47.957 Initializing NVMe Controllers 00:17:47.957 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.957 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.957 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:47.957 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:47.957 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:47.957 Initialization complete. Launching workers. 
00:17:47.957 Starting thread on core 2 00:17:47.957 Starting thread on core 3 00:17:47.957 Starting thread on core 1 00:17:47.957 11:45:19 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:50.533 Initializing NVMe Controllers 00:17:50.533 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.533 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.533 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:50.533 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:50.533 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:50.533 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:50.533 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:50.533 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:50.533 Initialization complete. Launching workers. 00:17:50.533 Starting thread on core 1 with urgent priority queue 00:17:50.533 Starting thread on core 2 with urgent priority queue 00:17:50.533 Starting thread on core 3 with urgent priority queue 00:17:50.533 Starting thread on core 0 with urgent priority queue 00:17:50.533 SPDK bdev Controller (SPDK1 ) core 0: 7385.33 IO/s 13.54 secs/100000 ios 00:17:50.533 SPDK bdev Controller (SPDK1 ) core 1: 7800.33 IO/s 12.82 secs/100000 ios 00:17:50.533 SPDK bdev Controller (SPDK1 ) core 2: 7562.33 IO/s 13.22 secs/100000 ios 00:17:50.533 SPDK bdev Controller (SPDK1 ) core 3: 7438.33 IO/s 13.44 secs/100000 ios 00:17:50.533 ======================================================== 00:17:50.533 00:17:50.533 11:45:23 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:50.533 Initializing NVMe Controllers 00:17:50.533 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.533 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.533 Namespace ID: 1 size: 0GB 00:17:50.533 Initialization complete. 00:17:50.533 INFO: using host memory buffer for IO 00:17:50.533 Hello world! 00:17:50.533 11:45:23 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:51.915 Initializing NVMe Controllers 00:17:51.915 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:51.915 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:51.915 Initialization complete. Launching workers. 
00:17:51.915 submit (in ns) avg, min, max = 5213.1, 3028.8, 4021800.9 00:17:51.915 complete (in ns) avg, min, max = 21013.6, 1658.5, 4039627.1 00:17:51.915 00:17:51.915 Submit histogram 00:17:51.915 ================ 00:17:51.915 Range in us Cumulative Count 00:17:51.915 3.018 - 3.032: 0.0059% ( 1) 00:17:51.915 3.032 - 3.046: 0.3090% ( 51) 00:17:51.915 3.046 - 3.060: 1.7352% ( 240) 00:17:51.915 3.060 - 3.074: 3.1614% ( 240) 00:17:51.915 3.074 - 3.088: 5.0155% ( 312) 00:17:51.915 3.088 - 3.102: 8.0699% ( 514) 00:17:51.915 3.102 - 3.116: 11.7423% ( 618) 00:17:51.915 3.116 - 3.130: 15.9437% ( 707) 00:17:51.915 3.130 - 3.144: 21.2860% ( 899) 00:17:51.915 3.144 - 3.158: 25.9330% ( 782) 00:17:51.915 3.158 - 3.172: 31.1326% ( 875) 00:17:51.915 3.172 - 3.186: 36.5641% ( 914) 00:17:51.915 3.186 - 3.200: 41.9658% ( 909) 00:17:51.915 3.200 - 3.214: 47.4923% ( 930) 00:17:51.915 3.214 - 3.228: 52.3235% ( 813) 00:17:51.915 3.228 - 3.242: 56.2990% ( 669) 00:17:51.915 3.242 - 3.256: 59.7575% ( 582) 00:17:51.915 3.256 - 3.270: 63.0021% ( 546) 00:17:51.915 3.270 - 3.284: 65.2306% ( 375) 00:17:51.915 3.284 - 3.298: 66.8885% ( 279) 00:17:51.915 3.298 - 3.312: 68.2315% ( 226) 00:17:51.915 3.312 - 3.326: 69.3130% ( 182) 00:17:51.915 3.326 - 3.340: 70.4659% ( 194) 00:17:51.915 3.340 - 3.354: 71.7257% ( 212) 00:17:51.915 3.354 - 3.368: 72.9023% ( 198) 00:17:51.915 3.368 - 3.382: 74.2215% ( 222) 00:17:51.915 3.382 - 3.396: 75.7785% ( 262) 00:17:51.915 3.396 - 3.410: 77.4186% ( 276) 00:17:51.915 3.410 - 3.424: 78.9101% ( 251) 00:17:51.915 3.424 - 3.438: 80.6513% ( 293) 00:17:51.915 3.438 - 3.452: 82.4459% ( 302) 00:17:51.915 3.452 - 3.466: 84.3594% ( 322) 00:17:51.915 3.466 - 3.479: 85.8688% ( 254) 00:17:51.915 3.479 - 3.493: 87.4376% ( 264) 00:17:51.915 3.493 - 3.507: 88.8935% ( 245) 00:17:51.915 3.507 - 3.521: 90.0998% ( 203) 00:17:51.915 3.521 - 3.535: 91.1695% ( 180) 00:17:51.916 3.535 - 3.549: 92.0252% ( 144) 00:17:51.916 3.549 - 3.563: 92.7561% ( 123) 00:17:51.916 3.563 - 3.577: 93.3979% ( 108) 00:17:51.916 3.577 - 3.605: 94.5270% ( 190) 00:17:51.916 3.605 - 3.633: 95.4362% ( 153) 00:17:51.916 3.633 - 3.661: 96.2146% ( 131) 00:17:51.916 3.661 - 3.689: 96.9277% ( 120) 00:17:51.916 3.689 - 3.717: 97.3318% ( 68) 00:17:51.916 3.717 - 3.745: 97.7419% ( 69) 00:17:51.916 3.745 - 3.773: 98.0152% ( 46) 00:17:51.916 3.773 - 3.801: 98.1578% ( 24) 00:17:51.916 3.801 - 3.829: 98.2826% ( 21) 00:17:51.916 3.829 - 3.857: 98.3718% ( 15) 00:17:51.916 3.857 - 3.885: 98.4966% ( 21) 00:17:51.916 3.885 - 3.913: 98.6273% ( 22) 00:17:51.916 3.913 - 3.941: 98.7461% ( 20) 00:17:51.916 3.941 - 3.969: 98.9066% ( 27) 00:17:51.916 3.969 - 3.997: 99.0789% ( 29) 00:17:51.916 3.997 - 4.024: 99.1621% ( 14) 00:17:51.916 4.024 - 4.052: 99.2394% ( 13) 00:17:51.916 4.052 - 4.080: 99.2928% ( 9) 00:17:51.916 4.080 - 4.108: 99.3463% ( 9) 00:17:51.916 4.108 - 4.136: 99.3582% ( 2) 00:17:51.916 4.136 - 4.164: 99.3820% ( 4) 00:17:51.916 4.164 - 4.192: 99.4117% ( 5) 00:17:51.916 4.192 - 4.220: 99.4176% ( 1) 00:17:51.916 4.220 - 4.248: 99.4295% ( 2) 00:17:51.916 4.248 - 4.276: 99.4355% ( 1) 00:17:51.916 4.276 - 4.304: 99.4473% ( 2) 00:17:51.916 4.304 - 4.332: 99.4533% ( 1) 00:17:51.916 4.332 - 4.360: 99.4711% ( 3) 00:17:51.916 4.360 - 4.388: 99.4771% ( 1) 00:17:51.916 4.388 - 4.416: 99.4830% ( 1) 00:17:51.916 4.416 - 4.444: 99.4889% ( 1) 00:17:51.916 4.500 - 4.528: 99.5008% ( 2) 00:17:51.916 4.555 - 4.583: 99.5187% ( 3) 00:17:51.916 4.639 - 4.667: 99.5246% ( 1) 00:17:51.916 4.695 - 4.723: 99.5305% ( 1) 00:17:51.916 4.751 - 4.779: 99.5365% ( 1) 00:17:51.916 
4.891 - 4.919: 99.5424% ( 1) 00:17:51.916 4.919 - 4.947: 99.5484% ( 1) 00:17:51.916 5.086 - 5.114: 99.5543% ( 1) 00:17:51.916 5.506 - 5.534: 99.5603% ( 1) 00:17:51.916 5.841 - 5.869: 99.5662% ( 1) 00:17:51.916 6.735 - 6.763: 99.5721% ( 1) 00:17:51.916 6.847 - 6.875: 99.5781% ( 1) 00:17:51.916 7.043 - 7.071: 99.5900% ( 2) 00:17:51.916 7.099 - 7.127: 99.6019% ( 2) 00:17:51.916 7.127 - 7.155: 99.6078% ( 1) 00:17:51.916 7.210 - 7.266: 99.6137% ( 1) 00:17:51.916 7.322 - 7.378: 99.6197% ( 1) 00:17:51.916 7.434 - 7.490: 99.6316% ( 2) 00:17:51.916 7.546 - 7.602: 99.6494% ( 3) 00:17:51.916 7.602 - 7.658: 99.6553% ( 1) 00:17:51.916 7.658 - 7.714: 99.6672% ( 2) 00:17:51.916 7.714 - 7.769: 99.6732% ( 1) 00:17:51.916 7.881 - 7.937: 99.6850% ( 2) 00:17:51.916 7.937 - 7.993: 99.6910% ( 1) 00:17:51.916 7.993 - 8.049: 99.6969% ( 1) 00:17:51.916 8.049 - 8.105: 99.7148% ( 3) 00:17:51.916 8.105 - 8.161: 99.7207% ( 1) 00:17:51.916 8.217 - 8.272: 99.7326% ( 2) 00:17:51.916 8.272 - 8.328: 99.7385% ( 1) 00:17:51.916 8.440 - 8.496: 99.7445% ( 1) 00:17:51.916 8.496 - 8.552: 99.7564% ( 2) 00:17:51.916 8.552 - 8.608: 99.7623% ( 1) 00:17:51.916 8.608 - 8.664: 99.7682% ( 1) 00:17:51.916 8.664 - 8.720: 99.7801% ( 2) 00:17:51.916 8.720 - 8.776: 99.7861% ( 1) 00:17:51.916 8.776 - 8.831: 99.7920% ( 1) 00:17:51.916 8.831 - 8.887: 99.7980% ( 1) 00:17:51.916 8.943 - 8.999: 99.8039% ( 1) 00:17:51.916 9.055 - 9.111: 99.8158% ( 2) 00:17:51.916 9.111 - 9.167: 99.8217% ( 1) 00:17:51.916 9.167 - 9.223: 99.8277% ( 1) 00:17:51.916 9.614 - 9.670: 99.8336% ( 1) 00:17:51.916 9.726 - 9.782: 99.8396% ( 1) 00:17:51.916 10.341 - 10.397: 99.8455% ( 1) 00:17:51.916 10.508 - 10.564: 99.8514% ( 1) 00:17:51.916 10.732 - 10.788: 99.8574% ( 1) 00:17:51.916 10.844 - 10.900: 99.8633% ( 1) 00:17:51.916 10.955 - 11.011: 99.8693% ( 1) 00:17:51.916 11.235 - 11.291: 99.8752% ( 1) 00:17:51.916 13.974 - 14.030: 99.8812% ( 1) 00:17:51.916 14.030 - 14.086: 99.8930% ( 2) 00:17:51.916 14.086 - 14.141: 99.8990% ( 1) 00:17:51.916 14.197 - 14.253: 99.9049% ( 1) 00:17:51.916 15.427 - 15.539: 99.9109% ( 1) 00:17:51.916 15.539 - 15.651: 99.9168% ( 1) 00:17:51.916 17.998 - 18.110: 99.9227% ( 1) 00:17:51.916 18.893 - 19.004: 99.9287% ( 1) 00:17:51.916 19.004 - 19.116: 99.9346% ( 1) 00:17:51.916 19.340 - 19.452: 99.9406% ( 1) 00:17:51.916 19.452 - 19.563: 99.9525% ( 2) 00:17:51.916 4006.568 - 4035.186: 100.0000% ( 8) 00:17:51.916 00:17:51.916 Complete histogram 00:17:51.916 ================== 00:17:51.916 Range in us Cumulative Count 00:17:51.916 1.656 - 1.663: 0.0773% ( 13) 00:17:51.916 1.663 - 1.670: 0.8379% ( 128) 00:17:51.916 1.670 - 1.677: 4.6648% ( 644) 00:17:51.916 1.677 - 1.684: 12.7526% ( 1361) 00:17:51.916 1.684 - 1.691: 22.7121% ( 1676) 00:17:51.916 1.691 - 1.698: 29.5401% ( 1149) 00:17:51.916 1.698 - 1.705: 33.5750% ( 679) 00:17:51.916 1.705 - 1.712: 35.8747% ( 387) 00:17:51.916 1.712 - 1.719: 38.1388% ( 381) 00:17:51.916 1.719 - 1.726: 42.8571% ( 794) 00:17:51.916 1.726 - 1.733: 52.9534% ( 1699) 00:17:51.916 1.733 - 1.740: 64.5591% ( 1953) 00:17:51.916 1.740 - 1.747: 73.2707% ( 1466) 00:17:51.916 1.747 - 1.754: 78.9339% ( 953) 00:17:51.916 1.754 - 1.761: 82.3152% ( 569) 00:17:51.916 1.761 - 1.768: 84.4545% ( 360) 00:17:51.916 1.768 - 1.775: 85.9104% ( 245) 00:17:51.916 1.775 - 1.782: 87.0454% ( 191) 00:17:51.916 1.782 - 1.789: 87.9130% ( 146) 00:17:51.916 1.789 - 1.803: 88.6618% ( 126) 00:17:51.916 1.803 - 1.817: 89.1550% ( 83) 00:17:51.916 1.817 - 1.831: 90.9437% ( 301) 00:17:51.916 1.831 - 1.845: 93.3266% ( 401) 00:17:51.916 1.845 - 1.859: 94.5210% ( 
201) 00:17:51.916 1.859 - 1.872: 95.0499% ( 89) 00:17:51.916 1.872 - 1.886: 95.2044% ( 26) 00:17:51.916 1.886 - 1.900: 95.2638% ( 10) 00:17:51.916 1.900 - 1.914: 95.3233% ( 10) 00:17:51.916 1.914 - 1.928: 95.3589% ( 6) 00:17:51.916 1.928 - 1.942: 95.3768% ( 3) 00:17:51.916 1.942 - 1.956: 95.3886% ( 2) 00:17:51.916 1.956 - 1.970: 95.4302% ( 7) 00:17:51.916 1.970 - 1.984: 95.4659% ( 6) 00:17:51.916 1.984 - 1.998: 95.5491% ( 14) 00:17:51.916 1.998 - 2.012: 95.8522% ( 51) 00:17:51.916 2.012 - 2.026: 96.3216% ( 79) 00:17:51.916 2.026 - 2.040: 96.4583% ( 23) 00:17:51.916 2.040 - 2.054: 96.6425% ( 31) 00:17:51.916 2.054 - 2.068: 97.2724% ( 106) 00:17:51.916 2.068 - 2.082: 98.1994% ( 156) 00:17:51.916 2.082 - 2.096: 98.6689% ( 79) 00:17:51.916 2.096 - 2.110: 98.7996% ( 22) 00:17:51.916 2.110 - 2.124: 98.8709% ( 12) 00:17:51.916 2.124 - 2.138: 98.8947% ( 4) 00:17:51.916 2.138 - 2.152: 98.9066% ( 2) 00:17:51.916 2.152 - 2.166: 98.9304% ( 4) 00:17:51.916 2.166 - 2.180: 98.9660% ( 6) 00:17:51.916 2.180 - 2.194: 98.9720% ( 1) 00:17:51.916 2.194 - 2.208: 98.9898% ( 3) 00:17:51.916 2.208 - 2.222: 99.0017% ( 2) 00:17:51.916 2.222 - 2.236: 99.0135% ( 2) 00:17:51.916 2.236 - 2.250: 99.0433% ( 5) 00:17:51.916 2.250 - 2.264: 99.0611% ( 3) 00:17:51.916 2.264 - 2.278: 99.0670% ( 1) 00:17:51.916 2.292 - 2.306: 99.0730% ( 1) 00:17:51.916 2.306 - 2.320: 99.0789% ( 1) 00:17:51.916 2.320 - 2.334: 99.0849% ( 1) 00:17:51.916 2.334 - 2.348: 99.0908% ( 1) 00:17:51.916 2.348 - 2.362: 99.0967% ( 1) 00:17:51.916 2.376 - 2.390: 99.1086% ( 2) 00:17:51.916 2.390 - 2.403: 99.1205% ( 2) 00:17:51.916 2.403 - 2.417: 99.1265% ( 1) 00:17:51.916 2.459 - 2.473: 99.1324% ( 1) 00:17:51.916 2.473 - 2.487: 99.1502% ( 3) 00:17:51.916 2.487 - 2.501: 99.1562% ( 1) 00:17:51.916 2.501 - 2.515: 99.1621% ( 1) 00:17:51.916 2.543 - 2.557: 99.1681% ( 1) 00:17:51.916 2.613 - 2.627: 99.1740% ( 1) 00:17:51.916 2.655 - 2.669: 99.1799% ( 1) 00:17:51.916 2.683 - 2.697: 99.1859% ( 1) 00:17:51.916 2.711 - 2.725: 99.1918% ( 1) 00:17:51.916 2.725 - 2.739: 99.1978% ( 1) 00:17:51.916 2.781 - 2.795: 99.2097% ( 2) 00:17:51.916 2.809 - 2.823: 99.2156% ( 1) 00:17:51.916 2.851 - 2.865: 99.2215% ( 1) 00:17:51.916 2.934 - 2.948: 99.2334% ( 2) 00:17:51.916 2.962 - 2.976: 99.2394% ( 1) 00:17:51.916 2.976 - 2.990: 99.2453% ( 1) 00:17:51.916 3.004 - 3.018: 99.2512% ( 1) 00:17:51.916 3.088 - 3.102: 99.2572% ( 1) 00:17:51.916 3.228 - 3.242: 99.2631% ( 1) 00:17:51.916 3.424 - 3.438: 99.2691% ( 1) 00:17:51.916 4.080 - 4.108: 99.2750% ( 1) 00:17:51.916 5.394 - 5.422: 99.2810% ( 1) 00:17:51.916 5.422 - 5.450: 99.2869% ( 1) 00:17:51.916 5.478 - 5.506: 99.2928% ( 1) 00:17:51.916 5.617 - 5.645: 99.2988% ( 1) 00:17:51.916 5.645 - 5.673: 99.3047% ( 1) 00:17:51.916 5.757 - 5.785: 99.3107% ( 1) 00:17:51.916 5.953 - 5.981: 99.3166% ( 1) 00:17:51.917 6.121 - 6.148: 99.3226% ( 1) 00:17:51.917 6.260 - 6.288: 99.3344% ( 2) 00:17:51.917 6.288 - 6.316: 99.3404% ( 1) 00:17:51.917 6.344 - 6.372: 99.3523% ( 2) 00:17:51.917 6.456 - 6.484: 99.3582% ( 1) 00:17:51.917 6.484 - 6.512: 99.3642% ( 1) 00:17:51.917 6.735 - 6.763: 99.3701% ( 1) 00:17:51.917 6.959 - 6.987: 99.3760% ( 1) 00:17:51.917 7.015 - 7.043: 99.3820% ( 1) 00:17:51.917 7.071 - 7.099: 99.3879% ( 1) 00:17:51.917 7.099 - 7.127: 99.4117% ( 4) 00:17:51.917 7.155 - 7.210: 99.4176% ( 1) 00:17:51.917 7.210 - 7.266: 99.4236% ( 1) 00:17:51.917 7.602 - 7.658: 99.4295% ( 1) 00:17:51.917 8.161 - 8.217: 99.4355% ( 1) 00:17:51.917 8.217 - 8.272: 99.4414% ( 1) 00:17:51.917 8.328 - 8.384: 99.4473% ( 1) 00:17:51.917 11.067 - 11.123: 99.4533% ( 1) 
00:17:51.917 11.570 - 11.626: 99.4592% ( 1) 00:17:51.917 11.962 - 12.017: 99.4652% ( 1) 00:17:51.917 12.129 - 12.185: 99.4711% ( 1) 00:17:51.917 12.465 - 12.521: 99.4771% ( 1) 00:17:51.917 12.632 - 12.688: 99.4830% ( 1) 00:17:51.917 14.980 - 15.092: 99.4889% ( 1) 00:17:51.917 17.663 - 17.775: 99.5008% ( 2) 00:17:51.917 17.775 - 17.886: 99.5068% ( 1) 00:17:51.917 17.886 - 17.998: 99.5127% ( 1) 00:17:51.917 18.110 - 18.222: 99.5187% ( 1) 00:17:51.917 3019.235 - 3033.544: 99.5246% ( 1) 00:17:51.917 3949.331 - 3977.949: 99.5305% ( 1) 00:17:51.917 3977.949 - 4006.568: 99.5603% ( 5) 00:17:51.917 4006.568 - 4035.186: 99.9941% ( 73) 00:17:51.917 4035.186 - 4063.804: 100.0000% ( 1) 00:17:51.917 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:51.917 [2024-11-20 11:45:24.867154] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:51.917 [ 00:17:51.917 { 00:17:51.917 "allow_any_host": true, 00:17:51.917 "hosts": [], 00:17:51.917 "listen_addresses": [], 00:17:51.917 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:51.917 "subtype": "Discovery" 00:17:51.917 }, 00:17:51.917 { 00:17:51.917 "allow_any_host": true, 00:17:51.917 "hosts": [], 00:17:51.917 "listen_addresses": [ 00:17:51.917 { 00:17:51.917 "adrfam": "IPv4", 00:17:51.917 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:51.917 "transport": "VFIOUSER", 00:17:51.917 "trsvcid": "0", 00:17:51.917 "trtype": "VFIOUSER" 00:17:51.917 } 00:17:51.917 ], 00:17:51.917 "max_cntlid": 65519, 00:17:51.917 "max_namespaces": 32, 00:17:51.917 "min_cntlid": 1, 00:17:51.917 "model_number": "SPDK bdev Controller", 00:17:51.917 "namespaces": [ 00:17:51.917 { 00:17:51.917 "bdev_name": "Malloc1", 00:17:51.917 "name": "Malloc1", 00:17:51.917 "nguid": "CEE245CF5C954669BC92CBD49658F3DD", 00:17:51.917 "nsid": 1, 00:17:51.917 "uuid": "cee245cf-5c95-4669-bc92-cbd49658f3dd" 00:17:51.917 } 00:17:51.917 ], 00:17:51.917 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:51.917 "serial_number": "SPDK1", 00:17:51.917 "subtype": "NVMe" 00:17:51.917 }, 00:17:51.917 { 00:17:51.917 "allow_any_host": true, 00:17:51.917 "hosts": [], 00:17:51.917 "listen_addresses": [ 00:17:51.917 { 00:17:51.917 "adrfam": "IPv4", 00:17:51.917 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:51.917 "transport": "VFIOUSER", 00:17:51.917 "trsvcid": "0", 00:17:51.917 "trtype": "VFIOUSER" 00:17:51.917 } 00:17:51.917 ], 00:17:51.917 "max_cntlid": 65519, 00:17:51.917 "max_namespaces": 32, 00:17:51.917 "min_cntlid": 1, 00:17:51.917 "model_number": "SPDK bdev Controller", 00:17:51.917 "namespaces": [ 00:17:51.917 { 00:17:51.917 "bdev_name": "Malloc2", 00:17:51.917 "name": "Malloc2", 00:17:51.917 "nguid": "500D6527E89742A7AB215288298D81F0", 00:17:51.917 "nsid": 1, 00:17:51.917 "uuid": "500d6527-e897-42a7-ab21-5288298d81f0" 00:17:51.917 } 00:17:51.917 ], 00:17:51.917 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:51.917 "serial_number": "SPDK2", 00:17:51.917 "subtype": "NVMe" 
00:17:51.917 } 00:17:51.917 ] 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71361 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:51.917 11:45:24 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:51.917 11:45:24 -- common/autotest_common.sh@1254 -- # local i=0 00:17:51.917 11:45:24 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:51.917 11:45:24 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:17:51.917 11:45:24 -- common/autotest_common.sh@1257 -- # i=1 00:17:51.917 11:45:24 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:17:52.177 11:45:24 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:52.177 11:45:24 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:17:52.177 11:45:24 -- common/autotest_common.sh@1257 -- # i=2 00:17:52.177 11:45:24 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:17:52.177 11:45:25 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:52.177 11:45:25 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:52.177 11:45:25 -- common/autotest_common.sh@1265 -- # return 0 00:17:52.177 11:45:25 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:52.177 11:45:25 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:52.436 Malloc3 00:17:52.436 11:45:25 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:52.696 11:45:25 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:52.696 Asynchronous Event Request test 00:17:52.696 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:52.696 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:52.696 Registering asynchronous event callbacks... 00:17:52.696 Starting namespace attribute notice tests for all controllers... 00:17:52.696 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:52.696 aer_cb - Changed Namespace 00:17:52.696 Cleaning up... 
00:17:52.956 [ 00:17:52.956 { 00:17:52.956 "allow_any_host": true, 00:17:52.956 "hosts": [], 00:17:52.956 "listen_addresses": [], 00:17:52.956 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:52.956 "subtype": "Discovery" 00:17:52.956 }, 00:17:52.956 { 00:17:52.956 "allow_any_host": true, 00:17:52.956 "hosts": [], 00:17:52.956 "listen_addresses": [ 00:17:52.956 { 00:17:52.956 "adrfam": "IPv4", 00:17:52.956 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:52.956 "transport": "VFIOUSER", 00:17:52.956 "trsvcid": "0", 00:17:52.956 "trtype": "VFIOUSER" 00:17:52.956 } 00:17:52.956 ], 00:17:52.956 "max_cntlid": 65519, 00:17:52.956 "max_namespaces": 32, 00:17:52.956 "min_cntlid": 1, 00:17:52.956 "model_number": "SPDK bdev Controller", 00:17:52.956 "namespaces": [ 00:17:52.956 { 00:17:52.956 "bdev_name": "Malloc1", 00:17:52.956 "name": "Malloc1", 00:17:52.956 "nguid": "CEE245CF5C954669BC92CBD49658F3DD", 00:17:52.956 "nsid": 1, 00:17:52.956 "uuid": "cee245cf-5c95-4669-bc92-cbd49658f3dd" 00:17:52.956 }, 00:17:52.956 { 00:17:52.956 "bdev_name": "Malloc3", 00:17:52.956 "name": "Malloc3", 00:17:52.956 "nguid": "3E9BA66CD9E146EABD437AECC213A05E", 00:17:52.956 "nsid": 2, 00:17:52.956 "uuid": "3e9ba66c-d9e1-46ea-bd43-7aecc213a05e" 00:17:52.956 } 00:17:52.956 ], 00:17:52.956 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:52.956 "serial_number": "SPDK1", 00:17:52.956 "subtype": "NVMe" 00:17:52.956 }, 00:17:52.956 { 00:17:52.956 "allow_any_host": true, 00:17:52.956 "hosts": [], 00:17:52.956 "listen_addresses": [ 00:17:52.956 { 00:17:52.956 "adrfam": "IPv4", 00:17:52.956 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:52.956 "transport": "VFIOUSER", 00:17:52.956 "trsvcid": "0", 00:17:52.956 "trtype": "VFIOUSER" 00:17:52.956 } 00:17:52.956 ], 00:17:52.956 "max_cntlid": 65519, 00:17:52.956 "max_namespaces": 32, 00:17:52.956 "min_cntlid": 1, 00:17:52.956 "model_number": "SPDK bdev Controller", 00:17:52.956 "namespaces": [ 00:17:52.956 { 00:17:52.956 "bdev_name": "Malloc2", 00:17:52.956 "name": "Malloc2", 00:17:52.956 "nguid": "500D6527E89742A7AB215288298D81F0", 00:17:52.956 "nsid": 1, 00:17:52.956 "uuid": "500d6527-e897-42a7-ab21-5288298d81f0" 00:17:52.956 } 00:17:52.956 ], 00:17:52.956 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:52.956 "serial_number": "SPDK2", 00:17:52.956 "subtype": "NVMe" 00:17:52.956 } 00:17:52.956 ] 00:17:52.956 11:45:25 -- target/nvmf_vfio_user.sh@44 -- # wait 71361 00:17:52.956 11:45:25 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:52.956 11:45:25 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:52.956 11:45:25 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:52.956 11:45:25 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:52.956 [2024-11-20 11:45:25.802383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:52.956 [2024-11-20 11:45:25.802426] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71398 ] 00:17:52.956 [2024-11-20 11:45:25.933840] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:52.956 [2024-11-20 11:45:25.942859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:52.956 [2024-11-20 11:45:25.942885] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9715aed000 00:17:52.956 [2024-11-20 11:45:25.943854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.944859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.945862] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.946866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.947869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.948867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.949868] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.950869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:52.956 [2024-11-20 11:45:25.951870] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:52.956 [2024-11-20 11:45:25.951886] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f971528d000 00:17:52.956 [2024-11-20 11:45:25.952931] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:52.956 [2024-11-20 11:45:25.965759] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:52.956 [2024-11-20 11:45:25.965789] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:52.956 [2024-11-20 11:45:25.970875] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:52.956 [2024-11-20 11:45:25.970922] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:52.956 [2024-11-20 11:45:25.971000] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:52.956 [2024-11-20 
11:45:25.971022] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:52.956 [2024-11-20 11:45:25.971029] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:52.956 [2024-11-20 11:45:25.971865] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:52.956 [2024-11-20 11:45:25.971877] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:52.956 [2024-11-20 11:45:25.971884] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:52.956 [2024-11-20 11:45:25.972866] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:52.956 [2024-11-20 11:45:25.972875] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:52.956 [2024-11-20 11:45:25.972882] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:52.956 [2024-11-20 11:45:25.973875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:52.956 [2024-11-20 11:45:25.973885] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:52.956 [2024-11-20 11:45:25.974882] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:52.956 [2024-11-20 11:45:25.974892] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:52.956 [2024-11-20 11:45:25.974895] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:52.956 [2024-11-20 11:45:25.974900] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:52.956 [2024-11-20 11:45:25.975004] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:52.956 [2024-11-20 11:45:25.975009] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:52.956 [2024-11-20 11:45:25.975013] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:52.957 [2024-11-20 11:45:25.975885] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:52.957 [2024-11-20 11:45:25.976886] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:52.957 [2024-11-20 11:45:25.977897] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:17:52.957 [2024-11-20 11:45:25.978914] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:52.957 [2024-11-20 11:45:25.979906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:52.957 [2024-11-20 11:45:25.979921] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:52.957 [2024-11-20 11:45:25.979926] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:52.957 [2024-11-20 11:45:25.979944] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:52.957 [2024-11-20 11:45:25.979957] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:52.957 [2024-11-20 11:45:25.979971] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:52.957 [2024-11-20 11:45:25.979975] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:52.957 [2024-11-20 11:45:25.979987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:52.957 [2024-11-20 11:45:25.987678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:52.957 [2024-11-20 11:45:25.987699] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:52.957 [2024-11-20 11:45:25.987703] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:52.957 [2024-11-20 11:45:25.987706] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:52.957 [2024-11-20 11:45:25.987710] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:52.957 [2024-11-20 11:45:25.987713] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:52.957 [2024-11-20 11:45:25.987717] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:52.957 [2024-11-20 11:45:25.987720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:52.957 [2024-11-20 11:45:25.987731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:52.957 [2024-11-20 11:45:25.987740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:52.957 [2024-11-20 11:45:25.995660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:52.957 [2024-11-20 11:45:25.995698] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.957 [2024-11-20 11:45:25.995706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.957 [2024-11-20 11:45:25.995712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.957 [2024-11-20 11:45:25.995718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.957 [2024-11-20 11:45:25.995722] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:52.957 [2024-11-20 11:45:25.995730] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:52.957 [2024-11-20 11:45:25.995737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:53.219 [2024-11-20 11:45:26.003676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:53.219 [2024-11-20 11:45:26.003689] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:53.219 [2024-11-20 11:45:26.003693] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:53.219 [2024-11-20 11:45:26.003698] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:53.219 [2024-11-20 11:45:26.003706] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:53.219 [2024-11-20 11:45:26.003713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:53.219 [2024-11-20 11:45:26.011677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:53.219 [2024-11-20 11:45:26.011736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:53.219 [2024-11-20 11:45:26.011742] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:53.219 [2024-11-20 11:45:26.011749] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:53.219 [2024-11-20 11:45:26.011753] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:53.220 [2024-11-20 11:45:26.011759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.019675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:53.220 [2024-11-20 
11:45:26.019694] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:53.220 [2024-11-20 11:45:26.019702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.019709] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.019714] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:53.220 [2024-11-20 11:45:26.019717] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:53.220 [2024-11-20 11:45:26.019722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.027675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:53.220 [2024-11-20 11:45:26.027695] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.027702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.027708] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:53.220 [2024-11-20 11:45:26.027711] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:53.220 [2024-11-20 11:45:26.027716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.035661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:53.220 [2024-11-20 11:45:26.035676] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.035681] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.035689] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.035694] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.035697] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.035701] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:53.220 [2024-11-20 11:45:26.035704] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:53.220 [2024-11-20 11:45:26.035707] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:53.220 [2024-11-20 11:45:26.035723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.043672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:53.220 [2024-11-20 11:45:26.043703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.051679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:53.220 [2024-11-20 11:45:26.051715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.059671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:53.220 [2024-11-20 11:45:26.059699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.067678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:53.220 [2024-11-20 11:45:26.067705] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:53.220 [2024-11-20 11:45:26.067709] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:53.220 [2024-11-20 11:45:26.067712] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:53.220 [2024-11-20 11:45:26.067714] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:53.220 [2024-11-20 11:45:26.067720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:53.220 [2024-11-20 11:45:26.067726] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:53.220 [2024-11-20 11:45:26.067729] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:53.220 [2024-11-20 11:45:26.067733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.067738] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:53.220 [2024-11-20 11:45:26.067741] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:53.220 [2024-11-20 11:45:26.067746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:53.220 [2024-11-20 11:45:26.067752] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:53.220 [2024-11-20 11:45:26.067755] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:53.220 [2024-11-20 11:45:26.067759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:53.220 ===================================================== 00:17:53.220 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:53.220 ===================================================== 00:17:53.220 Controller Capabilities/Features 00:17:53.220 ================================ 00:17:53.220 Vendor ID: 4e58 00:17:53.220 Subsystem Vendor ID: 4e58 00:17:53.220 Serial Number: SPDK2 00:17:53.220 Model Number: SPDK bdev Controller 00:17:53.220 Firmware Version: 24.01.1 00:17:53.220 Recommended Arb Burst: 6 00:17:53.220 IEEE OUI Identifier: 8d 6b 50 00:17:53.220 Multi-path I/O 00:17:53.220 May have multiple subsystem ports: Yes 00:17:53.220 May have multiple controllers: Yes 00:17:53.220 Associated with SR-IOV VF: No 00:17:53.220 Max Data Transfer Size: 131072 00:17:53.220 Max Number of Namespaces: 32 00:17:53.220 Max Number of I/O Queues: 127 00:17:53.220 NVMe Specification Version (VS): 1.3 00:17:53.220 NVMe Specification Version (Identify): 1.3 00:17:53.220 Maximum Queue Entries: 256 00:17:53.220 Contiguous Queues Required: Yes 00:17:53.220 Arbitration Mechanisms Supported 00:17:53.220 Weighted Round Robin: Not Supported 00:17:53.220 Vendor Specific: Not Supported 00:17:53.220 Reset Timeout: 15000 ms 00:17:53.220 Doorbell Stride: 4 bytes 00:17:53.220 NVM Subsystem Reset: Not Supported 00:17:53.220 Command Sets Supported 00:17:53.220 NVM Command Set: Supported 00:17:53.220 Boot Partition: Not Supported 00:17:53.220 Memory Page Size Minimum: 4096 bytes 00:17:53.220 Memory Page Size Maximum: 4096 bytes 00:17:53.220 Persistent Memory Region: Not Supported 00:17:53.220 Optional Asynchronous Events Supported 00:17:53.220 Namespace Attribute Notices: Supported 00:17:53.220 Firmware Activation Notices: Not Supported 00:17:53.220 ANA Change Notices: Not Supported 00:17:53.220 PLE Aggregate Log Change Notices: Not Supported 00:17:53.220 LBA Status Info Alert Notices: Not Supported 00:17:53.220 EGE Aggregate Log Change Notices: Not Supported 00:17:53.220 Normal NVM Subsystem Shutdown event: Not Supported 00:17:53.220 Zone Descriptor Change Notices: Not Supported 00:17:53.220 Discovery Log Change Notices: Not Supported 00:17:53.220 Controller Attributes 00:17:53.220 128-bit Host Identifier: Supported 00:17:53.220 Non-Operational Permissive Mode: Not Supported 00:17:53.220 NVM Sets: Not Supported 00:17:53.220 Read Recovery Levels: Not Supported 00:17:53.220 Endurance Groups: Not Supported 00:17:53.220 Predictable Latency Mode: Not Supported 00:17:53.220 Traffic Based Keep ALive: Not Supported 00:17:53.220 Namespace Granularity: Not Supported 00:17:53.220 SQ Associations: Not Supported 00:17:53.220 UUID List: Not Supported 00:17:53.220 Multi-Domain Subsystem: Not Supported 00:17:53.220 Fixed Capacity Management: Not Supported 00:17:53.220 Variable Capacity Management: Not Supported 00:17:53.220 Delete Endurance Group: Not Supported 00:17:53.221 Delete NVM Set: Not Supported 00:17:53.221 Extended LBA Formats Supported: Not Supported 00:17:53.221 Flexible Data Placement Supported: Not Supported 00:17:53.221 00:17:53.221 Controller Memory Buffer Support 00:17:53.221 ================================ 00:17:53.221 Supported: No 00:17:53.221 00:17:53.221 Persistent Memory Region Support 00:17:53.221 ================================ 00:17:53.221 Supported: No 00:17:53.221 00:17:53.221 Admin Command Set Attributes 00:17:53.221 ============================ 00:17:53.221 Security 
Send/Receive: Not Supported 00:17:53.221 Format NVM: Not Supported 00:17:53.221 Firmware Activate/Download: Not Supported 00:17:53.221 Namespace Management: Not Supported 00:17:53.221 Device Self-Test: Not Supported 00:17:53.221 Directives: Not Supported 00:17:53.221 NVMe-MI: Not Supported 00:17:53.221 Virtualization Management: Not Supported 00:17:53.221 Doorbell Buffer Config: Not Supported 00:17:53.221 Get LBA Status Capability: Not Supported 00:17:53.221 Command & Feature Lockdown Capability: Not Supported 00:17:53.221 Abort Command Limit: 4 00:17:53.221 Async Event Request Limit: 4 00:17:53.221 Number of Firmware Slots: N/A 00:17:53.221 Firmware Slot 1 Read-Only: N/A 00:17:53.221 Firmware Activation Wit[2024-11-20 11:45:26.075675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.075705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.075713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.075719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:53.221 hout Reset: N/A 00:17:53.221 Multiple Update Detection Support: N/A 00:17:53.221 Firmware Update Granularity: No Information Provided 00:17:53.221 Per-Namespace SMART Log: No 00:17:53.221 Asymmetric Namespace Access Log Page: Not Supported 00:17:53.221 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:53.221 Command Effects Log Page: Supported 00:17:53.221 Get Log Page Extended Data: Supported 00:17:53.221 Telemetry Log Pages: Not Supported 00:17:53.221 Persistent Event Log Pages: Not Supported 00:17:53.221 Supported Log Pages Log Page: May Support 00:17:53.221 Commands Supported & Effects Log Page: Not Supported 00:17:53.221 Feature Identifiers & Effects Log Page:May Support 00:17:53.221 NVMe-MI Commands & Effects Log Page: May Support 00:17:53.221 Data Area 4 for Telemetry Log: Not Supported 00:17:53.221 Error Log Page Entries Supported: 128 00:17:53.221 Keep Alive: Supported 00:17:53.221 Keep Alive Granularity: 10000 ms 00:17:53.221 00:17:53.221 NVM Command Set Attributes 00:17:53.221 ========================== 00:17:53.221 Submission Queue Entry Size 00:17:53.221 Max: 64 00:17:53.221 Min: 64 00:17:53.221 Completion Queue Entry Size 00:17:53.221 Max: 16 00:17:53.221 Min: 16 00:17:53.221 Number of Namespaces: 32 00:17:53.221 Compare Command: Supported 00:17:53.221 Write Uncorrectable Command: Not Supported 00:17:53.221 Dataset Management Command: Supported 00:17:53.221 Write Zeroes Command: Supported 00:17:53.221 Set Features Save Field: Not Supported 00:17:53.221 Reservations: Not Supported 00:17:53.221 Timestamp: Not Supported 00:17:53.221 Copy: Supported 00:17:53.221 Volatile Write Cache: Present 00:17:53.221 Atomic Write Unit (Normal): 1 00:17:53.221 Atomic Write Unit (PFail): 1 00:17:53.221 Atomic Compare & Write Unit: 1 00:17:53.221 Fused Compare & Write: Supported 00:17:53.221 Scatter-Gather List 00:17:53.221 SGL Command Set: Supported (Dword aligned) 00:17:53.221 SGL Keyed: Not Supported 00:17:53.221 SGL Bit Bucket Descriptor: Not Supported 00:17:53.221 SGL Metadata Pointer: Not Supported 00:17:53.221 Oversized SGL: Not Supported 00:17:53.221 SGL Metadata Address: Not Supported 00:17:53.221 SGL Offset: Not Supported 00:17:53.221 Transport SGL Data Block: 
Not Supported 00:17:53.221 Replay Protected Memory Block: Not Supported 00:17:53.221 00:17:53.221 Firmware Slot Information 00:17:53.221 ========================= 00:17:53.221 Active slot: 1 00:17:53.221 Slot 1 Firmware Revision: 24.01.1 00:17:53.221 00:17:53.221 00:17:53.221 Commands Supported and Effects 00:17:53.221 ============================== 00:17:53.221 Admin Commands 00:17:53.221 -------------- 00:17:53.221 Get Log Page (02h): Supported 00:17:53.221 Identify (06h): Supported 00:17:53.221 Abort (08h): Supported 00:17:53.221 Set Features (09h): Supported 00:17:53.221 Get Features (0Ah): Supported 00:17:53.221 Asynchronous Event Request (0Ch): Supported 00:17:53.221 Keep Alive (18h): Supported 00:17:53.221 I/O Commands 00:17:53.221 ------------ 00:17:53.221 Flush (00h): Supported LBA-Change 00:17:53.221 Write (01h): Supported LBA-Change 00:17:53.221 Read (02h): Supported 00:17:53.221 Compare (05h): Supported 00:17:53.221 Write Zeroes (08h): Supported LBA-Change 00:17:53.221 Dataset Management (09h): Supported LBA-Change 00:17:53.221 Copy (19h): Supported LBA-Change 00:17:53.221 Unknown (79h): Supported LBA-Change 00:17:53.221 Unknown (7Ah): Supported 00:17:53.221 00:17:53.221 Error Log 00:17:53.221 ========= 00:17:53.221 00:17:53.221 Arbitration 00:17:53.221 =========== 00:17:53.221 Arbitration Burst: 1 00:17:53.221 00:17:53.221 Power Management 00:17:53.221 ================ 00:17:53.221 Number of Power States: 1 00:17:53.221 Current Power State: Power State #0 00:17:53.221 Power State #0: 00:17:53.221 Max Power: 0.00 W 00:17:53.221 Non-Operational State: Operational 00:17:53.221 Entry Latency: Not Reported 00:17:53.221 Exit Latency: Not Reported 00:17:53.221 Relative Read Throughput: 0 00:17:53.221 Relative Read Latency: 0 00:17:53.221 Relative Write Throughput: 0 00:17:53.221 Relative Write Latency: 0 00:17:53.221 Idle Power: Not Reported 00:17:53.221 Active Power: Not Reported 00:17:53.221 Non-Operational Permissive Mode: Not Supported 00:17:53.221 00:17:53.221 Health Information 00:17:53.221 ================== 00:17:53.221 Critical Warnings: 00:17:53.221 Available Spare Space: OK 00:17:53.221 Temperature: OK 00:17:53.221 Device Reliability: OK 00:17:53.221 Read Only: No 00:17:53.221 Volatile Memory Backup: OK 00:17:53.221 Current Temperature: 0 Kelvin[2024-11-20 11:45:26.075812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:53.221 [2024-11-20 11:45:26.083676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.083716] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:53.221 [2024-11-20 11:45:26.083724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.083729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.083734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.083738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.221 [2024-11-20 11:45:26.083796] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:53.221 [2024-11-20 11:45:26.083807] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:53.221 [2024-11-20 11:45:26.084826] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:53.221 [2024-11-20 11:45:26.084838] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:53.221 [2024-11-20 11:45:26.085782] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:53.221 [2024-11-20 11:45:26.085799] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:53.221 [2024-11-20 11:45:26.085935] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:53.221 [2024-11-20 11:45:26.086917] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:53.221 (-273 Celsius) 00:17:53.222 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:53.222 Available Spare: 0% 00:17:53.222 Available Spare Threshold: 0% 00:17:53.222 Life Percentage Used: 0% 00:17:53.222 Data Units Read: 0 00:17:53.222 Data Units Written: 0 00:17:53.222 Host Read Commands: 0 00:17:53.222 Host Write Commands: 0 00:17:53.222 Controller Busy Time: 0 minutes 00:17:53.222 Power Cycles: 0 00:17:53.222 Power On Hours: 0 hours 00:17:53.222 Unsafe Shutdowns: 0 00:17:53.222 Unrecoverable Media Errors: 0 00:17:53.222 Lifetime Error Log Entries: 0 00:17:53.222 Warning Temperature Time: 0 minutes 00:17:53.222 Critical Temperature Time: 0 minutes 00:17:53.222 00:17:53.222 Number of Queues 00:17:53.222 ================ 00:17:53.222 Number of I/O Submission Queues: 127 00:17:53.222 Number of I/O Completion Queues: 127 00:17:53.222 00:17:53.222 Active Namespaces 00:17:53.222 ================= 00:17:53.222 Namespace ID:1 00:17:53.222 Error Recovery Timeout: Unlimited 00:17:53.222 Command Set Identifier: NVM (00h) 00:17:53.222 Deallocate: Supported 00:17:53.222 Deallocated/Unwritten Error: Not Supported 00:17:53.222 Deallocated Read Value: Unknown 00:17:53.222 Deallocate in Write Zeroes: Not Supported 00:17:53.222 Deallocated Guard Field: 0xFFFF 00:17:53.222 Flush: Supported 00:17:53.222 Reservation: Supported 00:17:53.222 Namespace Sharing Capabilities: Multiple Controllers 00:17:53.222 Size (in LBAs): 131072 (0GiB) 00:17:53.222 Capacity (in LBAs): 131072 (0GiB) 00:17:53.222 Utilization (in LBAs): 131072 (0GiB) 00:17:53.222 NGUID: 500D6527E89742A7AB215288298D81F0 00:17:53.222 UUID: 500d6527-e897-42a7-ab21-5288298d81f0 00:17:53.222 Thin Provisioning: Not Supported 00:17:53.222 Per-NS Atomic Units: Yes 00:17:53.222 Atomic Boundary Size (Normal): 0 00:17:53.222 Atomic Boundary Size (PFail): 0 00:17:53.222 Atomic Boundary Offset: 0 00:17:53.222 Maximum Single Source Range Length: 65535 00:17:53.222 Maximum Copy Length: 65535 00:17:53.222 Maximum Source Range Count: 1 00:17:53.222 NGUID/EUI64 Never Reused: No 00:17:53.222 Namespace Write Protected: No 00:17:53.222 Number of LBA Formats: 1 00:17:53.222 Current LBA Format: LBA Format #00 00:17:53.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:53.222 00:17:53.222 
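Note: the identify dump above describes the SPDK2 controller exposed at /var/run/vfio-user/domain/vfio-user2/2 (subsystem nqn.2019-07.io.spdk:cnode2), whose single namespace is the Malloc2 bdev (NGUID 500D6527E89742A7AB215288298D81F0). As a quick cross-check from the target side, the same data can be pulled with the RPC that appears later in this run; this is only a sketch and assumes the nvmf_tgt from this run is still listening on its default RPC socket:

  # Sketch: list subsystems and namespaces from the running target (default /var/tmp/spdk.sock)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
  # The nqn.2019-07.io.spdk:cnode2 entry should report Malloc2 with
  # nguid 500D6527E89742A7AB215288298D81F0, matching the Active Namespaces section above.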
11:45:26 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:58.504 Initializing NVMe Controllers 00:17:58.504 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:58.504 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:58.504 Initialization complete. Launching workers. 00:17:58.504 ======================================================== 00:17:58.504 Latency(us) 00:17:58.504 Device Information : IOPS MiB/s Average min max 00:17:58.504 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39270.91 153.40 3260.03 925.11 9760.29 00:17:58.504 ======================================================== 00:17:58.504 Total : 39270.91 153.40 3260.03 925.11 9760.29 00:17:58.504 00:17:58.504 11:45:31 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:03.776 Initializing NVMe Controllers 00:18:03.776 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:03.776 Initialization complete. Launching workers. 00:18:03.776 ======================================================== 00:18:03.777 Latency(us) 00:18:03.777 Device Information : IOPS MiB/s Average min max 00:18:03.777 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38390.84 149.96 3333.61 924.17 10226.61 00:18:03.777 ======================================================== 00:18:03.777 Total : 38390.84 149.96 3333.61 924.17 10226.61 00:18:03.777 00:18:03.777 11:45:36 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:10.368 Initializing NVMe Controllers 00:18:10.368 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:10.368 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:10.368 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:10.368 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:10.368 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:10.368 Initialization complete. Launching workers. 
00:18:10.368 Starting thread on core 2 00:18:10.368 Starting thread on core 3 00:18:10.368 Starting thread on core 1 00:18:10.368 11:45:42 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:12.903 Initializing NVMe Controllers 00:18:12.903 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.903 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.903 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:12.903 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:12.903 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:12.903 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:12.903 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:18:12.903 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:12.903 Initialization complete. Launching workers. 00:18:12.903 Starting thread on core 1 with urgent priority queue 00:18:12.903 Starting thread on core 2 with urgent priority queue 00:18:12.903 Starting thread on core 3 with urgent priority queue 00:18:12.903 Starting thread on core 0 with urgent priority queue 00:18:12.903 SPDK bdev Controller (SPDK2 ) core 0: 4666.00 IO/s 21.43 secs/100000 ios 00:18:12.903 SPDK bdev Controller (SPDK2 ) core 1: 5893.00 IO/s 16.97 secs/100000 ios 00:18:12.903 SPDK bdev Controller (SPDK2 ) core 2: 4412.67 IO/s 22.66 secs/100000 ios 00:18:12.903 SPDK bdev Controller (SPDK2 ) core 3: 4772.33 IO/s 20.95 secs/100000 ios 00:18:12.903 ======================================================== 00:18:12.903 00:18:12.903 11:45:45 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:12.903 Initializing NVMe Controllers 00:18:12.903 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.903 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:12.903 Namespace ID: 1 size: 0GB 00:18:12.903 Initialization complete. 00:18:12.903 INFO: using host memory buffer for IO 00:18:12.903 Hello world! 00:18:12.903 11:45:45 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:14.281 Initializing NVMe Controllers 00:18:14.282 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:14.282 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:14.282 Initialization complete. Launching workers. 
00:18:14.282 submit (in ns) avg, min, max = 5643.1, 3011.4, 4027909.2 00:18:14.282 complete (in ns) avg, min, max = 22160.7, 1655.0, 7020994.8 00:18:14.282 00:18:14.282 Submit histogram 00:18:14.282 ================ 00:18:14.282 Range in us Cumulative Count 00:18:14.282 3.004 - 3.018: 0.0118% ( 2) 00:18:14.282 3.018 - 3.032: 0.0944% ( 14) 00:18:14.282 3.032 - 3.046: 0.6257% ( 90) 00:18:14.282 3.046 - 3.060: 3.2346% ( 442) 00:18:14.282 3.060 - 3.074: 5.6251% ( 405) 00:18:14.282 3.074 - 3.088: 8.7534% ( 530) 00:18:14.282 3.088 - 3.102: 12.6372% ( 658) 00:18:14.282 3.102 - 3.116: 16.6686% ( 683) 00:18:14.282 3.116 - 3.130: 22.0753% ( 916) 00:18:14.282 3.130 - 3.144: 28.5090% ( 1090) 00:18:14.282 3.144 - 3.158: 33.2900% ( 810) 00:18:14.282 3.158 - 3.172: 37.8114% ( 766) 00:18:14.282 3.172 - 3.186: 42.7163% ( 831) 00:18:14.282 3.186 - 3.200: 47.5091% ( 812) 00:18:14.282 3.200 - 3.214: 52.9690% ( 925) 00:18:14.282 3.214 - 3.228: 57.0004% ( 683) 00:18:14.282 3.228 - 3.242: 59.8867% ( 489) 00:18:14.282 3.242 - 3.256: 62.0824% ( 372) 00:18:14.282 3.256 - 3.270: 64.4788% ( 406) 00:18:14.282 3.270 - 3.284: 66.7454% ( 384) 00:18:14.282 3.284 - 3.298: 68.3922% ( 279) 00:18:14.282 3.298 - 3.312: 69.7438% ( 229) 00:18:14.282 3.312 - 3.326: 71.1958% ( 246) 00:18:14.282 3.326 - 3.340: 72.7069% ( 256) 00:18:14.282 3.340 - 3.354: 74.9026% ( 372) 00:18:14.282 3.354 - 3.368: 76.4786% ( 267) 00:18:14.282 3.368 - 3.382: 77.9955% ( 257) 00:18:14.282 3.382 - 3.396: 79.7781% ( 302) 00:18:14.282 3.396 - 3.410: 81.8085% ( 344) 00:18:14.282 3.410 - 3.424: 83.5321% ( 292) 00:18:14.282 3.424 - 3.438: 85.4563% ( 326) 00:18:14.282 3.438 - 3.452: 87.1975% ( 295) 00:18:14.282 3.452 - 3.466: 89.0568% ( 315) 00:18:14.282 3.466 - 3.479: 90.3376% ( 217) 00:18:14.282 3.479 - 3.493: 91.4827% ( 194) 00:18:14.282 3.493 - 3.507: 92.3858% ( 153) 00:18:14.282 3.507 - 3.521: 93.0882% ( 119) 00:18:14.282 3.521 - 3.535: 93.8673% ( 132) 00:18:14.282 3.535 - 3.549: 94.7114% ( 143) 00:18:14.282 3.549 - 3.563: 95.4256% ( 121) 00:18:14.282 3.563 - 3.577: 96.0689% ( 109) 00:18:14.282 3.577 - 3.605: 97.1491% ( 183) 00:18:14.282 3.605 - 3.633: 97.9932% ( 143) 00:18:14.282 3.633 - 3.661: 98.6070% ( 104) 00:18:14.282 3.661 - 3.689: 98.9435% ( 57) 00:18:14.282 3.689 - 3.717: 99.1323% ( 32) 00:18:14.282 3.717 - 3.745: 99.2209% ( 15) 00:18:14.282 3.745 - 3.773: 99.2445% ( 4) 00:18:14.282 3.773 - 3.801: 99.2622% ( 3) 00:18:14.282 3.801 - 3.829: 99.2681% ( 1) 00:18:14.282 3.829 - 3.857: 99.3271% ( 10) 00:18:14.282 3.857 - 3.885: 99.3743% ( 8) 00:18:14.282 3.885 - 3.913: 99.4098% ( 6) 00:18:14.282 3.913 - 3.941: 99.4629% ( 9) 00:18:14.282 3.941 - 3.969: 99.5101% ( 8) 00:18:14.282 3.969 - 3.997: 99.5396% ( 5) 00:18:14.282 3.997 - 4.024: 99.5809% ( 7) 00:18:14.282 4.024 - 4.052: 99.5986% ( 3) 00:18:14.282 6.707 - 6.735: 99.6045% ( 1) 00:18:14.282 6.763 - 6.791: 99.6104% ( 1) 00:18:14.282 6.819 - 6.847: 99.6163% ( 1) 00:18:14.282 6.875 - 6.903: 99.6222% ( 1) 00:18:14.282 6.903 - 6.931: 99.6340% ( 2) 00:18:14.282 6.931 - 6.959: 99.6399% ( 1) 00:18:14.282 7.127 - 7.155: 99.6518% ( 2) 00:18:14.282 7.155 - 7.210: 99.6695% ( 3) 00:18:14.282 7.210 - 7.266: 99.6754% ( 1) 00:18:14.282 7.322 - 7.378: 99.6990% ( 4) 00:18:14.282 7.378 - 7.434: 99.7049% ( 1) 00:18:14.282 7.434 - 7.490: 99.7108% ( 1) 00:18:14.282 7.546 - 7.602: 99.7226% ( 2) 00:18:14.282 7.658 - 7.714: 99.7285% ( 1) 00:18:14.282 7.769 - 7.825: 99.7344% ( 1) 00:18:14.282 7.937 - 7.993: 99.7403% ( 1) 00:18:14.282 7.993 - 8.049: 99.7521% ( 2) 00:18:14.282 8.049 - 8.105: 99.7580% ( 1) 00:18:14.282 8.105 - 
8.161: 99.7639% ( 1) 00:18:14.282 8.217 - 8.272: 99.7757% ( 2) 00:18:14.282 8.272 - 8.328: 99.7816% ( 1) 00:18:14.282 8.328 - 8.384: 99.8052% ( 4) 00:18:14.282 8.384 - 8.440: 99.8111% ( 1) 00:18:14.282 8.608 - 8.664: 99.8170% ( 1) 00:18:14.282 8.720 - 8.776: 99.8288% ( 2) 00:18:14.282 8.776 - 8.831: 99.8347% ( 1) 00:18:14.282 8.887 - 8.943: 99.8406% ( 1) 00:18:14.282 9.279 - 9.334: 99.8465% ( 1) 00:18:14.282 9.558 - 9.614: 99.8524% ( 1) 00:18:14.282 10.397 - 10.452: 99.8583% ( 1) 00:18:14.282 14.141 - 14.197: 99.8642% ( 1) 00:18:14.282 15.427 - 15.539: 99.8938% ( 5) 00:18:14.282 15.762 - 15.874: 99.9056% ( 2) 00:18:14.282 16.210 - 16.321: 99.9115% ( 1) 00:18:14.282 19.228 - 19.340: 99.9174% ( 1) 00:18:14.282 19.340 - 19.452: 99.9351% ( 3) 00:18:14.282 23.252 - 23.364: 99.9410% ( 1) 00:18:14.282 3977.949 - 4006.568: 99.9469% ( 1) 00:18:14.282 4006.568 - 4035.186: 100.0000% ( 9) 00:18:14.282 00:18:14.282 Complete histogram 00:18:14.282 ================== 00:18:14.282 Range in us Cumulative Count 00:18:14.282 1.649 - 1.656: 0.0059% ( 1) 00:18:14.282 1.656 - 1.663: 0.2066% ( 34) 00:18:14.282 1.663 - 1.670: 1.6645% ( 247) 00:18:14.282 1.670 - 1.677: 4.6925% ( 513) 00:18:14.282 1.677 - 1.684: 8.1572% ( 587) 00:18:14.282 1.684 - 1.691: 10.9668% ( 476) 00:18:14.282 1.691 - 1.698: 12.7081% ( 295) 00:18:14.282 1.698 - 1.705: 14.8920% ( 370) 00:18:14.282 1.705 - 1.712: 20.5584% ( 960) 00:18:14.282 1.712 - 1.719: 34.1400% ( 2301) 00:18:14.282 1.719 - 1.726: 50.3778% ( 2751) 00:18:14.282 1.726 - 1.733: 62.2713% ( 2015) 00:18:14.282 1.733 - 1.740: 68.9175% ( 1126) 00:18:14.282 1.740 - 1.747: 73.4683% ( 771) 00:18:14.282 1.747 - 1.754: 76.3782% ( 493) 00:18:14.282 1.754 - 1.761: 77.8243% ( 245) 00:18:14.282 1.761 - 1.768: 78.8868% ( 180) 00:18:14.282 1.768 - 1.775: 79.9374% ( 178) 00:18:14.282 1.775 - 1.782: 80.9231% ( 167) 00:18:14.282 1.782 - 1.789: 81.4839% ( 95) 00:18:14.282 1.789 - 1.803: 82.4106% ( 157) 00:18:14.282 1.803 - 1.817: 87.4572% ( 855) 00:18:14.282 1.817 - 1.831: 93.7847% ( 1072) 00:18:14.282 1.831 - 1.845: 96.3168% ( 429) 00:18:14.282 1.845 - 1.859: 97.4796% ( 197) 00:18:14.282 1.859 - 1.872: 97.7098% ( 39) 00:18:14.282 1.872 - 1.886: 97.7393% ( 5) 00:18:14.282 1.886 - 1.900: 97.7512% ( 2) 00:18:14.282 1.914 - 1.928: 97.7689% ( 3) 00:18:14.282 1.942 - 1.956: 97.7748% ( 1) 00:18:14.282 1.970 - 1.984: 97.7866% ( 2) 00:18:14.282 1.984 - 1.998: 97.8633% ( 13) 00:18:14.282 1.998 - 2.012: 98.0109% ( 25) 00:18:14.282 2.012 - 2.026: 98.0876% ( 13) 00:18:14.282 2.026 - 2.040: 98.2411% ( 26) 00:18:14.282 2.040 - 2.054: 98.8077% ( 96) 00:18:14.282 2.054 - 2.068: 99.1441% ( 57) 00:18:14.282 2.068 - 2.082: 99.1796% ( 6) 00:18:14.282 2.082 - 2.096: 99.2032% ( 4) 00:18:14.282 2.096 - 2.110: 99.2091% ( 1) 00:18:14.283 2.110 - 2.124: 99.2150% ( 1) 00:18:14.283 2.543 - 2.557: 99.2209% ( 1) 00:18:14.283 2.753 - 2.767: 99.2268% ( 1) 00:18:14.283 5.394 - 5.422: 99.2327% ( 1) 00:18:14.283 5.450 - 5.478: 99.2386% ( 1) 00:18:14.283 5.478 - 5.506: 99.2504% ( 2) 00:18:14.283 5.590 - 5.617: 99.2622% ( 2) 00:18:14.283 5.729 - 5.757: 99.2681% ( 1) 00:18:14.283 5.785 - 5.813: 99.2740% ( 1) 00:18:14.283 5.897 - 5.925: 99.2858% ( 2) 00:18:14.283 5.981 - 6.009: 99.3094% ( 4) 00:18:14.283 6.009 - 6.037: 99.3153% ( 1) 00:18:14.283 6.065 - 6.093: 99.3212% ( 1) 00:18:14.283 6.232 - 6.260: 99.3448% ( 4) 00:18:14.283 6.260 - 6.288: 99.3507% ( 1) 00:18:14.283 6.372 - 6.400: 99.3566% ( 1) 00:18:14.283 6.400 - 6.428: 99.3625% ( 1) 00:18:14.283 6.512 - 6.540: 99.3743% ( 2) 00:18:14.283 6.540 - 6.568: 99.3802% ( 1) 00:18:14.283 
6.568 - 6.596: 99.3861% ( 1) 00:18:14.283 6.707 - 6.735: 99.3920% ( 1) 00:18:14.283 6.735 - 6.763: 99.3979% ( 1) 00:18:14.283 6.763 - 6.791: 99.4038% ( 1) 00:18:14.283 6.903 - 6.931: 99.4098% ( 1) 00:18:14.283 7.043 - 7.071: 99.4157% ( 1) 00:18:14.283 7.490 - 7.546: 99.4216% ( 1) 00:18:14.283 7.546 - 7.602: 99.4275% ( 1) 00:18:14.283 7.658 - 7.714: 99.4334% ( 1) 00:18:14.283 7.937 - 7.993: 99.4393% ( 1) 00:18:14.283 8.608 - 8.664: 99.4452% ( 1) 00:18:14.283 8.776 - 8.831: 99.4511% ( 1) 00:18:14.283 9.558 - 9.614: 99.4570% ( 1) 00:18:14.283 10.620 - 10.676: 99.4629% ( 1) 00:18:14.283 12.688 - 12.744: 99.4688% ( 1) 00:18:14.283 13.079 - 13.135: 99.4747% ( 1) 00:18:14.283 13.974 - 14.030: 99.4806% ( 1) 00:18:14.283 14.141 - 14.197: 99.4865% ( 1) 00:18:14.283 17.775 - 17.886: 99.4924% ( 1) 00:18:14.283 17.998 - 18.110: 99.5042% ( 2) 00:18:14.283 3019.235 - 3033.544: 99.5219% ( 3) 00:18:14.283 3577.293 - 3591.602: 99.5278% ( 1) 00:18:14.283 3949.331 - 3977.949: 99.5337% ( 1) 00:18:14.283 3977.949 - 4006.568: 99.5927% ( 10) 00:18:14.283 4006.568 - 4035.186: 99.9764% ( 65) 00:18:14.283 7011.493 - 7040.112: 100.0000% ( 4) 00:18:14.283 00:18:14.283 11:45:47 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:14.283 11:45:47 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:14.283 11:45:47 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:14.283 11:45:47 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:14.283 11:45:47 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:14.545 [ 00:18:14.545 { 00:18:14.545 "allow_any_host": true, 00:18:14.545 "hosts": [], 00:18:14.545 "listen_addresses": [], 00:18:14.545 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:14.545 "subtype": "Discovery" 00:18:14.545 }, 00:18:14.545 { 00:18:14.545 "allow_any_host": true, 00:18:14.545 "hosts": [], 00:18:14.545 "listen_addresses": [ 00:18:14.545 { 00:18:14.545 "adrfam": "IPv4", 00:18:14.545 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:14.545 "transport": "VFIOUSER", 00:18:14.545 "trsvcid": "0", 00:18:14.545 "trtype": "VFIOUSER" 00:18:14.545 } 00:18:14.545 ], 00:18:14.545 "max_cntlid": 65519, 00:18:14.545 "max_namespaces": 32, 00:18:14.545 "min_cntlid": 1, 00:18:14.545 "model_number": "SPDK bdev Controller", 00:18:14.545 "namespaces": [ 00:18:14.545 { 00:18:14.545 "bdev_name": "Malloc1", 00:18:14.545 "name": "Malloc1", 00:18:14.545 "nguid": "CEE245CF5C954669BC92CBD49658F3DD", 00:18:14.545 "nsid": 1, 00:18:14.545 "uuid": "cee245cf-5c95-4669-bc92-cbd49658f3dd" 00:18:14.545 }, 00:18:14.545 { 00:18:14.545 "bdev_name": "Malloc3", 00:18:14.545 "name": "Malloc3", 00:18:14.545 "nguid": "3E9BA66CD9E146EABD437AECC213A05E", 00:18:14.545 "nsid": 2, 00:18:14.545 "uuid": "3e9ba66c-d9e1-46ea-bd43-7aecc213a05e" 00:18:14.545 } 00:18:14.545 ], 00:18:14.545 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:14.545 "serial_number": "SPDK1", 00:18:14.545 "subtype": "NVMe" 00:18:14.545 }, 00:18:14.545 { 00:18:14.545 "allow_any_host": true, 00:18:14.545 "hosts": [], 00:18:14.545 "listen_addresses": [ 00:18:14.545 { 00:18:14.545 "adrfam": "IPv4", 00:18:14.545 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:14.545 "transport": "VFIOUSER", 00:18:14.545 "trsvcid": "0", 00:18:14.545 "trtype": "VFIOUSER" 00:18:14.545 } 00:18:14.545 ], 00:18:14.545 "max_cntlid": 65519, 00:18:14.545 "max_namespaces": 32, 00:18:14.545 
"min_cntlid": 1, 00:18:14.545 "model_number": "SPDK bdev Controller", 00:18:14.545 "namespaces": [ 00:18:14.545 { 00:18:14.545 "bdev_name": "Malloc2", 00:18:14.545 "name": "Malloc2", 00:18:14.545 "nguid": "500D6527E89742A7AB215288298D81F0", 00:18:14.545 "nsid": 1, 00:18:14.545 "uuid": "500d6527-e897-42a7-ab21-5288298d81f0" 00:18:14.545 } 00:18:14.545 ], 00:18:14.545 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:14.545 "serial_number": "SPDK2", 00:18:14.545 "subtype": "NVMe" 00:18:14.545 } 00:18:14.545 ] 00:18:14.545 11:45:47 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:14.545 11:45:47 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71656 00:18:14.545 11:45:47 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:14.545 11:45:47 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:14.545 11:45:47 -- common/autotest_common.sh@1254 -- # local i=0 00:18:14.545 11:45:47 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:14.545 11:45:47 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:18:14.545 11:45:47 -- common/autotest_common.sh@1257 -- # i=1 00:18:14.545 11:45:47 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:18:14.819 11:45:47 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:14.819 11:45:47 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:18:14.819 11:45:47 -- common/autotest_common.sh@1257 -- # i=2 00:18:14.819 11:45:47 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:18:14.819 11:45:47 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:14.819 11:45:47 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:14.819 11:45:47 -- common/autotest_common.sh@1265 -- # return 0 00:18:14.819 11:45:47 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:14.819 11:45:47 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:15.078 Malloc4 00:18:15.078 11:45:47 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:15.338 11:45:48 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:15.338 Asynchronous Event Request test 00:18:15.338 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:15.338 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:15.338 Registering asynchronous event callbacks... 00:18:15.338 Starting namespace attribute notice tests for all controllers... 00:18:15.338 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:15.338 aer_cb - Changed Namespace 00:18:15.338 Cleaning up... 
00:18:15.338 [ 00:18:15.338 { 00:18:15.338 "allow_any_host": true, 00:18:15.338 "hosts": [], 00:18:15.338 "listen_addresses": [], 00:18:15.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:15.338 "subtype": "Discovery" 00:18:15.338 }, 00:18:15.338 { 00:18:15.338 "allow_any_host": true, 00:18:15.338 "hosts": [], 00:18:15.338 "listen_addresses": [ 00:18:15.338 { 00:18:15.338 "adrfam": "IPv4", 00:18:15.338 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:15.338 "transport": "VFIOUSER", 00:18:15.338 "trsvcid": "0", 00:18:15.338 "trtype": "VFIOUSER" 00:18:15.338 } 00:18:15.338 ], 00:18:15.338 "max_cntlid": 65519, 00:18:15.338 "max_namespaces": 32, 00:18:15.338 "min_cntlid": 1, 00:18:15.338 "model_number": "SPDK bdev Controller", 00:18:15.338 "namespaces": [ 00:18:15.338 { 00:18:15.338 "bdev_name": "Malloc1", 00:18:15.338 "name": "Malloc1", 00:18:15.338 "nguid": "CEE245CF5C954669BC92CBD49658F3DD", 00:18:15.338 "nsid": 1, 00:18:15.338 "uuid": "cee245cf-5c95-4669-bc92-cbd49658f3dd" 00:18:15.338 }, 00:18:15.338 { 00:18:15.338 "bdev_name": "Malloc3", 00:18:15.338 "name": "Malloc3", 00:18:15.338 "nguid": "3E9BA66CD9E146EABD437AECC213A05E", 00:18:15.338 "nsid": 2, 00:18:15.338 "uuid": "3e9ba66c-d9e1-46ea-bd43-7aecc213a05e" 00:18:15.338 } 00:18:15.338 ], 00:18:15.338 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:15.338 "serial_number": "SPDK1", 00:18:15.338 "subtype": "NVMe" 00:18:15.338 }, 00:18:15.338 { 00:18:15.338 "allow_any_host": true, 00:18:15.338 "hosts": [], 00:18:15.338 "listen_addresses": [ 00:18:15.338 { 00:18:15.338 "adrfam": "IPv4", 00:18:15.338 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:15.338 "transport": "VFIOUSER", 00:18:15.338 "trsvcid": "0", 00:18:15.338 "trtype": "VFIOUSER" 00:18:15.338 } 00:18:15.338 ], 00:18:15.338 "max_cntlid": 65519, 00:18:15.338 "max_namespaces": 32, 00:18:15.338 "min_cntlid": 1, 00:18:15.338 "model_number": "SPDK bdev Controller", 00:18:15.338 "namespaces": [ 00:18:15.338 { 00:18:15.338 "bdev_name": "Malloc2", 00:18:15.338 "name": "Malloc2", 00:18:15.338 "nguid": "500D6527E89742A7AB215288298D81F0", 00:18:15.338 "nsid": 1, 00:18:15.338 "uuid": "500d6527-e897-42a7-ab21-5288298d81f0" 00:18:15.338 }, 00:18:15.338 { 00:18:15.338 "bdev_name": "Malloc4", 00:18:15.338 "name": "Malloc4", 00:18:15.338 "nguid": "9A5B5E719B1A49F0B3B10E5F98184FF7", 00:18:15.338 "nsid": 2, 00:18:15.338 "uuid": "9a5b5e71-9b1a-49f0-b3b1-0e5f98184ff7" 00:18:15.338 } 00:18:15.338 ], 00:18:15.338 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:15.338 "serial_number": "SPDK2", 00:18:15.338 "subtype": "NVMe" 00:18:15.338 } 00:18:15.338 ] 00:18:15.598 11:45:48 -- target/nvmf_vfio_user.sh@44 -- # wait 71656 00:18:15.598 11:45:48 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:15.598 11:45:48 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70983 00:18:15.598 11:45:48 -- common/autotest_common.sh@936 -- # '[' -z 70983 ']' 00:18:15.598 11:45:48 -- common/autotest_common.sh@940 -- # kill -0 70983 00:18:15.598 11:45:48 -- common/autotest_common.sh@941 -- # uname 00:18:15.598 11:45:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.598 11:45:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70983 00:18:15.598 killing process with pid 70983 00:18:15.598 11:45:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:15.598 11:45:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:15.598 11:45:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70983' 00:18:15.598 11:45:48 -- 
common/autotest_common.sh@955 -- # kill 70983 00:18:15.598 [2024-11-20 11:45:48.433880] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:15.598 11:45:48 -- common/autotest_common.sh@960 -- # wait 70983 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71704 00:18:15.858 Process pid: 71704 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71704' 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71704 00:18:15.858 11:45:48 -- common/autotest_common.sh@829 -- # '[' -z 71704 ']' 00:18:15.858 11:45:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.858 11:45:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.858 11:45:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.858 11:45:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.858 11:45:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.858 11:45:48 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:15.858 [2024-11-20 11:45:48.791691] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:15.858 [2024-11-20 11:45:48.792484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:15.858 [2024-11-20 11:45:48.792534] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.119 [2024-11-20 11:45:48.929324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.119 [2024-11-20 11:45:49.020745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.119 [2024-11-20 11:45:49.020869] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.119 [2024-11-20 11:45:49.020877] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.119 [2024-11-20 11:45:49.020882] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.119 [2024-11-20 11:45:49.021101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.119 [2024-11-20 11:45:49.021366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.119 [2024-11-20 11:45:49.021459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.119 [2024-11-20 11:45:49.021477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.119 [2024-11-20 11:45:49.089462] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:18:16.119 [2024-11-20 11:45:49.097868] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:18:16.119 [2024-11-20 11:45:49.097989] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:18:16.119 [2024-11-20 11:45:49.098312] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:16.119 [2024-11-20 11:45:49.098431] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:18:16.694 11:45:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.694 11:45:49 -- common/autotest_common.sh@862 -- # return 0 00:18:16.694 11:45:49 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:17.633 11:45:50 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:17.893 11:45:50 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:17.893 11:45:50 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:17.893 11:45:50 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:17.893 11:45:50 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:17.893 11:45:50 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:18.152 Malloc1 00:18:18.152 11:45:51 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:18.411 11:45:51 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:18.671 11:45:51 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:18.930 11:45:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:18.930 11:45:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:18.930 11:45:51 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:19.189 Malloc2 00:18:19.189 11:45:52 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:19.447 11:45:52 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:19.706 11:45:52 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:19.965 
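Note: the setup_nvmf_vfio_user sequence traced above (VFIOUSER transport creation followed by two malloc-backed subsystems, each with a vfio-user listener) can be reproduced outside the harness with a short loop. This is a sketch built only from the RPCs visible in the trace and assumes an nvmf_tgt already started with --interrupt-mode, as above, on the default RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Transport flags exactly as in the trace above
  $rpc nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
    # One vfio-user endpoint directory, malloc bdev, and subsystem per device
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done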
11:45:52 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:19.965 11:45:52 -- target/nvmf_vfio_user.sh@95 -- # killprocess 71704 00:18:19.965 11:45:52 -- common/autotest_common.sh@936 -- # '[' -z 71704 ']' 00:18:19.965 11:45:52 -- common/autotest_common.sh@940 -- # kill -0 71704 00:18:19.965 11:45:52 -- common/autotest_common.sh@941 -- # uname 00:18:19.965 11:45:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.965 11:45:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71704 00:18:19.965 11:45:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:19.965 11:45:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:19.965 11:45:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71704' 00:18:19.965 killing process with pid 71704 00:18:19.965 11:45:52 -- common/autotest_common.sh@955 -- # kill 71704 00:18:19.965 11:45:52 -- common/autotest_common.sh@960 -- # wait 71704 00:18:20.224 11:45:53 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:20.224 11:45:53 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:20.224 00:18:20.224 real 0m53.722s 00:18:20.224 user 3m31.449s 00:18:20.224 sys 0m3.668s 00:18:20.224 11:45:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:20.224 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:18:20.224 ************************************ 00:18:20.224 END TEST nvmf_vfio_user 00:18:20.224 ************************************ 00:18:20.225 11:45:53 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:20.225 11:45:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:20.225 11:45:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.225 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:18:20.225 ************************************ 00:18:20.225 START TEST nvmf_vfio_user_nvme_compliance 00:18:20.225 ************************************ 00:18:20.225 11:45:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:20.486 * Looking for test storage... 
00:18:20.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:18:20.486 11:45:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:20.486 11:45:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:20.486 11:45:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:20.486 11:45:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:20.486 11:45:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:20.486 11:45:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:20.486 11:45:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:20.486 11:45:53 -- scripts/common.sh@335 -- # IFS=.-: 00:18:20.486 11:45:53 -- scripts/common.sh@335 -- # read -ra ver1 00:18:20.486 11:45:53 -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.486 11:45:53 -- scripts/common.sh@336 -- # read -ra ver2 00:18:20.486 11:45:53 -- scripts/common.sh@337 -- # local 'op=<' 00:18:20.486 11:45:53 -- scripts/common.sh@339 -- # ver1_l=2 00:18:20.486 11:45:53 -- scripts/common.sh@340 -- # ver2_l=1 00:18:20.486 11:45:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:20.486 11:45:53 -- scripts/common.sh@343 -- # case "$op" in 00:18:20.486 11:45:53 -- scripts/common.sh@344 -- # : 1 00:18:20.486 11:45:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:20.486 11:45:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:20.486 11:45:53 -- scripts/common.sh@364 -- # decimal 1 00:18:20.486 11:45:53 -- scripts/common.sh@352 -- # local d=1 00:18:20.486 11:45:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.486 11:45:53 -- scripts/common.sh@354 -- # echo 1 00:18:20.486 11:45:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:20.486 11:45:53 -- scripts/common.sh@365 -- # decimal 2 00:18:20.486 11:45:53 -- scripts/common.sh@352 -- # local d=2 00:18:20.486 11:45:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.486 11:45:53 -- scripts/common.sh@354 -- # echo 2 00:18:20.486 11:45:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:20.486 11:45:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:20.486 11:45:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:20.486 11:45:53 -- scripts/common.sh@367 -- # return 0 00:18:20.486 11:45:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.486 11:45:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:20.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.486 --rc genhtml_branch_coverage=1 00:18:20.486 --rc genhtml_function_coverage=1 00:18:20.486 --rc genhtml_legend=1 00:18:20.486 --rc geninfo_all_blocks=1 00:18:20.486 --rc geninfo_unexecuted_blocks=1 00:18:20.486 00:18:20.486 ' 00:18:20.486 11:45:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:20.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.486 --rc genhtml_branch_coverage=1 00:18:20.486 --rc genhtml_function_coverage=1 00:18:20.487 --rc genhtml_legend=1 00:18:20.487 --rc geninfo_all_blocks=1 00:18:20.487 --rc geninfo_unexecuted_blocks=1 00:18:20.487 00:18:20.487 ' 00:18:20.487 11:45:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:20.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.487 --rc genhtml_branch_coverage=1 00:18:20.487 --rc genhtml_function_coverage=1 00:18:20.487 --rc genhtml_legend=1 00:18:20.487 --rc geninfo_all_blocks=1 00:18:20.487 --rc geninfo_unexecuted_blocks=1 00:18:20.487 00:18:20.487 ' 00:18:20.487 
11:45:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:20.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.487 --rc genhtml_branch_coverage=1 00:18:20.487 --rc genhtml_function_coverage=1 00:18:20.487 --rc genhtml_legend=1 00:18:20.487 --rc geninfo_all_blocks=1 00:18:20.487 --rc geninfo_unexecuted_blocks=1 00:18:20.487 00:18:20.487 ' 00:18:20.487 11:45:53 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:20.487 11:45:53 -- nvmf/common.sh@7 -- # uname -s 00:18:20.487 11:45:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.487 11:45:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.487 11:45:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.487 11:45:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.487 11:45:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.487 11:45:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.487 11:45:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.487 11:45:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.487 11:45:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.487 11:45:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.487 11:45:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:18:20.487 11:45:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:18:20.487 11:45:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.487 11:45:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.487 11:45:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:20.487 11:45:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:20.487 11:45:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.487 11:45:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.487 11:45:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.487 11:45:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.487 11:45:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.487 11:45:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.487 11:45:53 -- paths/export.sh@5 -- # export PATH 00:18:20.487 11:45:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.487 11:45:53 -- nvmf/common.sh@46 -- # : 0 00:18:20.487 11:45:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:20.487 11:45:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:20.487 11:45:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:20.487 11:45:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.487 11:45:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.487 11:45:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:20.487 11:45:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:20.487 11:45:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:20.487 11:45:53 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.487 11:45:53 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.487 11:45:53 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:20.487 11:45:53 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:20.487 11:45:53 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:20.487 11:45:53 -- compliance/compliance.sh@20 -- # nvmfpid=71898 00:18:20.487 Process pid: 71898 00:18:20.487 11:45:53 -- compliance/compliance.sh@21 -- # echo 'Process pid: 71898' 00:18:20.487 11:45:53 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:20.487 11:45:53 -- compliance/compliance.sh@24 -- # waitforlisten 71898 00:18:20.487 11:45:53 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:20.487 11:45:53 -- common/autotest_common.sh@829 -- # '[' -z 71898 ']' 00:18:20.487 11:45:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.487 11:45:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.487 11:45:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.487 11:45:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.487 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:18:20.487 [2024-11-20 11:45:53.494115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:20.487 [2024-11-20 11:45:53.494186] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.749 [2024-11-20 11:45:53.632448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:20.749 [2024-11-20 11:45:53.735772] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:20.749 [2024-11-20 11:45:53.735919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.749 [2024-11-20 11:45:53.735928] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.749 [2024-11-20 11:45:53.735935] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.749 [2024-11-20 11:45:53.736089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.749 [2024-11-20 11:45:53.736191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.749 [2024-11-20 11:45:53.736193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.685 11:45:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.685 11:45:54 -- common/autotest_common.sh@862 -- # return 0 00:18:21.685 11:45:54 -- compliance/compliance.sh@26 -- # sleep 1 00:18:22.622 11:45:55 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:22.622 11:45:55 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:22.622 11:45:55 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:22.622 11:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.622 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:18:22.622 11:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.622 11:45:55 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:22.622 11:45:55 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:22.622 11:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.622 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:18:22.622 malloc0 00:18:22.622 11:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.622 11:45:55 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:22.622 11:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.622 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:18:22.622 11:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.622 11:45:55 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:22.622 11:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.622 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:18:22.622 11:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.622 11:45:55 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:22.622 11:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.622 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:18:22.622 11:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.622 11:45:55 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 
'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:22.882 00:18:22.882 00:18:22.882 CUnit - A unit testing framework for C - Version 2.1-3 00:18:22.882 http://cunit.sourceforge.net/ 00:18:22.882 00:18:22.882 00:18:22.882 Suite: nvme_compliance 00:18:22.882 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 11:45:55.747437] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:22.882 [2024-11-20 11:45:55.747480] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:22.882 [2024-11-20 11:45:55.747487] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:22.882 passed 00:18:22.882 Test: admin_identify_ctrlr_verify_fused ...passed 00:18:23.141 Test: admin_identify_ns ...[2024-11-20 11:45:55.989678] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:23.141 [2024-11-20 11:45:55.997673] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:23.141 passed 00:18:23.141 Test: admin_get_features_mandatory_features ...passed 00:18:23.400 Test: admin_get_features_optional_features ...passed 00:18:23.400 Test: admin_set_features_number_of_queues ...passed 00:18:23.660 Test: admin_get_log_page_mandatory_logs ...passed 00:18:23.660 Test: admin_get_log_page_with_lpo ...[2024-11-20 11:45:56.633671] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:23.660 passed 00:18:23.919 Test: fabric_property_get ...passed 00:18:23.919 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 11:45:56.801622] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:23.919 passed 00:18:24.178 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 11:45:56.962672] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:24.178 [2024-11-20 11:45:56.978670] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:24.178 passed 00:18:24.178 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 11:45:57.071265] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:24.178 passed 00:18:24.437 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 11:45:57.235668] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:24.437 [2024-11-20 11:45:57.259667] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:24.437 passed 00:18:24.437 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 11:45:57.352047] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:24.437 [2024-11-20 11:45:57.352111] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:24.437 passed 00:18:24.696 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 11:45:57.530674] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:24.696 [2024-11-20 11:45:57.538667] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:24.696 [2024-11-20 11:45:57.546665] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:24.696 [2024-11-20 11:45:57.554667] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:24.696 passed 
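The vfio-user target that this compliance suite talks to was assembled by the rpc_cmd calls traced just above: create the VFIOUSER transport, back it with a 64 MiB malloc bdev, expose the bdev as a namespace of nqn.2021-09.io.spdk:cnode0, and listen on /var/run/vfio-user. A condensed sketch of that sequence, assuming scripts/rpc.py against the default /var/tmp/spdk.sock of the nvmf_tgt started earlier (the harness's rpc_cmd wrapper and its error handling are omitted):

# Sketch: stand up the vfio-user compliance target (condensed from the rpc_cmd trace above).
SPDK=/home/vagrant/spdk_repo/spdk
NQN=nqn.2021-09.io.spdk:cnode0
TRADDR=/var/run/vfio-user

mkdir -p "$TRADDR"
"$SPDK"/scripts/rpc.py nvmf_create_transport -t VFIOUSER
"$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0            # 64 MiB bdev, 512-byte blocks
"$SPDK"/scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s spdk -m 32   # -a: allow any host, -s: serial number
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0

# Point the compliance tool at the vfio-user endpoint:
"$SPDK"/test/nvme/compliance/nvme_compliance -g \
    -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$NQN"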
00:18:24.696 Test: admin_create_io_sq_verify_pc ...[2024-11-20 11:45:57.686684] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:24.954 passed 00:18:25.893 Test: admin_create_io_qp_max_qps ...[2024-11-20 11:45:58.885668] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:26.461 passed 00:18:26.461 Test: admin_create_io_sq_shared_cq ...[2024-11-20 11:45:59.491672] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:26.722 passed 00:18:26.722 00:18:26.722 Run Summary: Type Total Ran Passed Failed Inactive 00:18:26.722 suites 1 1 n/a 0 0 00:18:26.723 tests 18 18 18 0 0 00:18:26.723 asserts 360 360 360 0 n/a 00:18:26.723 00:18:26.723 Elapsed time = 1.568 seconds 00:18:26.723 11:45:59 -- compliance/compliance.sh@42 -- # killprocess 71898 00:18:26.723 11:45:59 -- common/autotest_common.sh@936 -- # '[' -z 71898 ']' 00:18:26.723 11:45:59 -- common/autotest_common.sh@940 -- # kill -0 71898 00:18:26.723 11:45:59 -- common/autotest_common.sh@941 -- # uname 00:18:26.723 11:45:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.723 11:45:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71898 00:18:26.723 11:45:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:26.723 11:45:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:26.723 killing process with pid 71898 00:18:26.723 11:45:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71898' 00:18:26.723 11:45:59 -- common/autotest_common.sh@955 -- # kill 71898 00:18:26.723 11:45:59 -- common/autotest_common.sh@960 -- # wait 71898 00:18:26.990 11:45:59 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:26.990 11:45:59 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:26.990 ************************************ 00:18:26.990 END TEST nvmf_vfio_user_nvme_compliance 00:18:26.990 ************************************ 00:18:26.990 00:18:26.990 real 0m6.708s 00:18:26.990 user 0m18.441s 00:18:26.990 sys 0m0.622s 00:18:26.990 11:45:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:26.990 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:18:26.990 11:45:59 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:26.990 11:45:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:26.990 11:45:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:26.990 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:18:26.990 ************************************ 00:18:26.990 START TEST nvmf_vfio_user_fuzz 00:18:26.990 ************************************ 00:18:26.990 11:45:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:26.990 * Looking for test storage... 
00:18:26.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:26.990 11:46:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:26.990 11:46:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:26.990 11:46:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:27.249 11:46:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:27.249 11:46:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:27.249 11:46:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:27.249 11:46:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:27.249 11:46:00 -- scripts/common.sh@335 -- # IFS=.-: 00:18:27.249 11:46:00 -- scripts/common.sh@335 -- # read -ra ver1 00:18:27.249 11:46:00 -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.249 11:46:00 -- scripts/common.sh@336 -- # read -ra ver2 00:18:27.249 11:46:00 -- scripts/common.sh@337 -- # local 'op=<' 00:18:27.249 11:46:00 -- scripts/common.sh@339 -- # ver1_l=2 00:18:27.249 11:46:00 -- scripts/common.sh@340 -- # ver2_l=1 00:18:27.249 11:46:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:27.249 11:46:00 -- scripts/common.sh@343 -- # case "$op" in 00:18:27.249 11:46:00 -- scripts/common.sh@344 -- # : 1 00:18:27.249 11:46:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:27.249 11:46:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:27.249 11:46:00 -- scripts/common.sh@364 -- # decimal 1 00:18:27.249 11:46:00 -- scripts/common.sh@352 -- # local d=1 00:18:27.249 11:46:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.249 11:46:00 -- scripts/common.sh@354 -- # echo 1 00:18:27.249 11:46:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:27.249 11:46:00 -- scripts/common.sh@365 -- # decimal 2 00:18:27.249 11:46:00 -- scripts/common.sh@352 -- # local d=2 00:18:27.249 11:46:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.249 11:46:00 -- scripts/common.sh@354 -- # echo 2 00:18:27.249 11:46:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:27.249 11:46:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:27.249 11:46:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:27.249 11:46:00 -- scripts/common.sh@367 -- # return 0 00:18:27.249 11:46:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.249 11:46:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:27.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.249 --rc genhtml_branch_coverage=1 00:18:27.249 --rc genhtml_function_coverage=1 00:18:27.249 --rc genhtml_legend=1 00:18:27.249 --rc geninfo_all_blocks=1 00:18:27.249 --rc geninfo_unexecuted_blocks=1 00:18:27.249 00:18:27.249 ' 00:18:27.249 11:46:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:27.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.249 --rc genhtml_branch_coverage=1 00:18:27.249 --rc genhtml_function_coverage=1 00:18:27.249 --rc genhtml_legend=1 00:18:27.249 --rc geninfo_all_blocks=1 00:18:27.249 --rc geninfo_unexecuted_blocks=1 00:18:27.249 00:18:27.249 ' 00:18:27.249 11:46:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:27.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.249 --rc genhtml_branch_coverage=1 00:18:27.249 --rc genhtml_function_coverage=1 00:18:27.249 --rc genhtml_legend=1 00:18:27.249 --rc geninfo_all_blocks=1 00:18:27.249 --rc geninfo_unexecuted_blocks=1 00:18:27.249 00:18:27.249 ' 00:18:27.249 
11:46:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:27.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.249 --rc genhtml_branch_coverage=1 00:18:27.249 --rc genhtml_function_coverage=1 00:18:27.249 --rc genhtml_legend=1 00:18:27.249 --rc geninfo_all_blocks=1 00:18:27.249 --rc geninfo_unexecuted_blocks=1 00:18:27.249 00:18:27.249 ' 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:27.249 11:46:00 -- nvmf/common.sh@7 -- # uname -s 00:18:27.249 11:46:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.249 11:46:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.249 11:46:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.249 11:46:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.249 11:46:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.249 11:46:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.249 11:46:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.249 11:46:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.249 11:46:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.249 11:46:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.249 11:46:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:18:27.249 11:46:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:18:27.249 11:46:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.249 11:46:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.249 11:46:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:27.249 11:46:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:27.249 11:46:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.249 11:46:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.249 11:46:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.249 11:46:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.249 11:46:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.249 11:46:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.249 11:46:00 -- paths/export.sh@5 -- # export PATH 00:18:27.249 11:46:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.249 11:46:00 -- nvmf/common.sh@46 -- # : 0 00:18:27.249 11:46:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:27.249 11:46:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:27.249 11:46:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:27.249 11:46:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.249 11:46:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.249 11:46:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:27.249 11:46:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:27.249 11:46:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=72058 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 72058' 00:18:27.249 Process pid: 72058 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:27.249 11:46:00 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 72058 00:18:27.249 11:46:00 -- common/autotest_common.sh@829 -- # '[' -z 72058 ']' 00:18:27.249 11:46:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.249 11:46:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.249 11:46:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
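As in the compliance test, the fuzz harness launches nvmf_tgt in the background (here pinned to one core with -m 0x1) and blocks until the target's RPC socket is ready before issuing any rpc_cmd. A minimal sketch of that launch-and-wait pattern; the polling loop below is an illustrative assumption, not a copy of autotest_common.sh's waitforlisten:

# Sketch: start the SPDK NVMe-oF target and wait for its RPC socket.
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

"$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1 &      # shm id 0, all tracepoint groups, core 0 only
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

for ((i = 0; i < 100; i++)); do          # roughly mirrors max_retries=100 above
    [[ -S "$RPC_SOCK" ]] && break        # assumption: socket presence means the app is listening
    sleep 0.1
done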
00:18:27.249 11:46:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.249 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:18:28.206 11:46:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.206 11:46:01 -- common/autotest_common.sh@862 -- # return 0 00:18:28.206 11:46:01 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:29.144 11:46:02 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:29.144 11:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.144 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.144 11:46:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.145 11:46:02 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:29.145 11:46:02 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:29.145 11:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.145 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.405 malloc0 00:18:29.405 11:46:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.405 11:46:02 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:29.405 11:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.405 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.405 11:46:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.405 11:46:02 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:29.405 11:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.405 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.405 11:46:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.405 11:46:02 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:29.405 11:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.405 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.405 11:46:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.405 11:46:02 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:29.405 11:46:02 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:29.665 Shutting down the fuzz application 00:18:29.665 11:46:02 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:29.665 11:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.665 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.665 11:46:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.665 11:46:02 -- target/vfio_user_fuzz.sh@46 -- # killprocess 72058 00:18:29.665 11:46:02 -- common/autotest_common.sh@936 -- # '[' -z 72058 ']' 00:18:29.665 11:46:02 -- common/autotest_common.sh@940 -- # kill -0 72058 00:18:29.665 11:46:02 -- common/autotest_common.sh@941 -- # uname 00:18:29.665 11:46:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.665 11:46:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72058 00:18:29.665 11:46:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:29.665 11:46:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
00:18:29.665 killing process with pid 72058 00:18:29.665 11:46:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72058' 00:18:29.665 11:46:02 -- common/autotest_common.sh@955 -- # kill 72058 00:18:29.665 11:46:02 -- common/autotest_common.sh@960 -- # wait 72058 00:18:29.924 11:46:02 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:29.924 11:46:02 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:29.924 00:18:29.924 real 0m2.980s 00:18:29.924 user 0m3.189s 00:18:29.924 sys 0m0.444s 00:18:29.924 11:46:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:29.924 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.925 ************************************ 00:18:29.925 END TEST nvmf_vfio_user_fuzz 00:18:29.925 ************************************ 00:18:29.925 11:46:02 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:29.925 11:46:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:29.925 11:46:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.925 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:30.185 ************************************ 00:18:30.185 START TEST nvmf_host_management 00:18:30.185 ************************************ 00:18:30.185 11:46:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:30.185 * Looking for test storage... 00:18:30.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:30.185 11:46:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:30.185 11:46:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:30.185 11:46:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:30.185 11:46:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:30.185 11:46:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:30.185 11:46:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:30.185 11:46:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:30.185 11:46:03 -- scripts/common.sh@335 -- # IFS=.-: 00:18:30.185 11:46:03 -- scripts/common.sh@335 -- # read -ra ver1 00:18:30.185 11:46:03 -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.185 11:46:03 -- scripts/common.sh@336 -- # read -ra ver2 00:18:30.185 11:46:03 -- scripts/common.sh@337 -- # local 'op=<' 00:18:30.185 11:46:03 -- scripts/common.sh@339 -- # ver1_l=2 00:18:30.185 11:46:03 -- scripts/common.sh@340 -- # ver2_l=1 00:18:30.185 11:46:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:30.185 11:46:03 -- scripts/common.sh@343 -- # case "$op" in 00:18:30.185 11:46:03 -- scripts/common.sh@344 -- # : 1 00:18:30.185 11:46:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:30.185 11:46:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:30.185 11:46:03 -- scripts/common.sh@364 -- # decimal 1 00:18:30.185 11:46:03 -- scripts/common.sh@352 -- # local d=1 00:18:30.185 11:46:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.185 11:46:03 -- scripts/common.sh@354 -- # echo 1 00:18:30.185 11:46:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:30.185 11:46:03 -- scripts/common.sh@365 -- # decimal 2 00:18:30.185 11:46:03 -- scripts/common.sh@352 -- # local d=2 00:18:30.185 11:46:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.185 11:46:03 -- scripts/common.sh@354 -- # echo 2 00:18:30.185 11:46:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:30.185 11:46:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:30.185 11:46:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:30.185 11:46:03 -- scripts/common.sh@367 -- # return 0 00:18:30.185 11:46:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.185 11:46:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:30.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.185 --rc genhtml_branch_coverage=1 00:18:30.185 --rc genhtml_function_coverage=1 00:18:30.185 --rc genhtml_legend=1 00:18:30.185 --rc geninfo_all_blocks=1 00:18:30.185 --rc geninfo_unexecuted_blocks=1 00:18:30.185 00:18:30.185 ' 00:18:30.185 11:46:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:30.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.185 --rc genhtml_branch_coverage=1 00:18:30.185 --rc genhtml_function_coverage=1 00:18:30.185 --rc genhtml_legend=1 00:18:30.185 --rc geninfo_all_blocks=1 00:18:30.185 --rc geninfo_unexecuted_blocks=1 00:18:30.185 00:18:30.185 ' 00:18:30.185 11:46:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:30.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.185 --rc genhtml_branch_coverage=1 00:18:30.185 --rc genhtml_function_coverage=1 00:18:30.185 --rc genhtml_legend=1 00:18:30.185 --rc geninfo_all_blocks=1 00:18:30.185 --rc geninfo_unexecuted_blocks=1 00:18:30.185 00:18:30.185 ' 00:18:30.185 11:46:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:30.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.185 --rc genhtml_branch_coverage=1 00:18:30.185 --rc genhtml_function_coverage=1 00:18:30.185 --rc genhtml_legend=1 00:18:30.185 --rc geninfo_all_blocks=1 00:18:30.185 --rc geninfo_unexecuted_blocks=1 00:18:30.185 00:18:30.185 ' 00:18:30.185 11:46:03 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:30.185 11:46:03 -- nvmf/common.sh@7 -- # uname -s 00:18:30.185 11:46:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.185 11:46:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.185 11:46:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.185 11:46:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.185 11:46:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.185 11:46:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.185 11:46:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.186 11:46:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.186 11:46:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.186 11:46:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.186 11:46:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
00:18:30.186 11:46:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:18:30.186 11:46:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.186 11:46:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.186 11:46:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:30.186 11:46:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:30.186 11:46:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.186 11:46:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.186 11:46:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.186 11:46:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.186 11:46:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.186 11:46:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.186 11:46:03 -- paths/export.sh@5 -- # export PATH 00:18:30.186 11:46:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.186 11:46:03 -- nvmf/common.sh@46 -- # : 0 00:18:30.186 11:46:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:30.186 11:46:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:30.186 11:46:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:30.186 11:46:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.186 11:46:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.186 11:46:03 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:30.186 11:46:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:30.186 11:46:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:30.186 11:46:03 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.186 11:46:03 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.186 11:46:03 -- target/host_management.sh@104 -- # nvmftestinit 00:18:30.186 11:46:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:30.186 11:46:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.447 11:46:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:30.447 11:46:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:30.447 11:46:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:30.447 11:46:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.447 11:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.447 11:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.447 11:46:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:30.447 11:46:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:30.447 11:46:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:30.447 11:46:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:30.447 11:46:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:30.447 11:46:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:30.447 11:46:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.447 11:46:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.447 11:46:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:30.447 11:46:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:30.447 11:46:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:30.447 11:46:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:30.447 11:46:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:30.447 11:46:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.447 11:46:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:30.447 11:46:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:30.447 11:46:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:30.447 11:46:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:30.447 11:46:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:30.447 11:46:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:30.447 Cannot find device "nvmf_tgt_br" 00:18:30.447 11:46:03 -- nvmf/common.sh@154 -- # true 00:18:30.447 11:46:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:30.447 Cannot find device "nvmf_tgt_br2" 00:18:30.447 11:46:03 -- nvmf/common.sh@155 -- # true 00:18:30.447 11:46:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:30.447 11:46:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:30.447 Cannot find device "nvmf_tgt_br" 00:18:30.447 11:46:03 -- nvmf/common.sh@157 -- # true 00:18:30.447 11:46:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:30.447 Cannot find device "nvmf_tgt_br2" 00:18:30.447 11:46:03 -- nvmf/common.sh@158 -- # true 00:18:30.447 11:46:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:30.447 11:46:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:30.447 11:46:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:30.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.447 11:46:03 -- nvmf/common.sh@161 -- # true 00:18:30.447 11:46:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.447 11:46:03 -- nvmf/common.sh@162 -- # true 00:18:30.447 11:46:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:30.447 11:46:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:30.447 11:46:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:30.447 11:46:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:30.447 11:46:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:30.447 11:46:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:30.447 11:46:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:30.447 11:46:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:30.447 11:46:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:30.447 11:46:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:30.447 11:46:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:30.447 11:46:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:30.447 11:46:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:30.447 11:46:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:30.447 11:46:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:30.707 11:46:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:30.707 11:46:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:30.707 11:46:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:30.707 11:46:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:30.707 11:46:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:30.708 11:46:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:30.708 11:46:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:30.708 11:46:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:30.708 11:46:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:30.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:18:30.708 00:18:30.708 --- 10.0.0.2 ping statistics --- 00:18:30.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.708 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:18:30.708 11:46:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:30.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:30.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:30.708 00:18:30.708 --- 10.0.0.3 ping statistics --- 00:18:30.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.708 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:30.708 11:46:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:30.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:30.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:18:30.708 00:18:30.708 --- 10.0.0.1 ping statistics --- 00:18:30.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.708 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:18:30.708 11:46:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.708 11:46:03 -- nvmf/common.sh@421 -- # return 0 00:18:30.708 11:46:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:30.708 11:46:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.708 11:46:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:30.708 11:46:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:30.708 11:46:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.708 11:46:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:30.708 11:46:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:30.708 11:46:03 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:18:30.708 11:46:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:30.708 11:46:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:30.708 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:18:30.708 ************************************ 00:18:30.708 START TEST nvmf_host_management 00:18:30.708 ************************************ 00:18:30.708 11:46:03 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:18:30.708 11:46:03 -- target/host_management.sh@69 -- # starttarget 00:18:30.708 11:46:03 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:30.708 11:46:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:30.708 11:46:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:30.708 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:18:30.708 11:46:03 -- nvmf/common.sh@469 -- # nvmfpid=72296 00:18:30.708 11:46:03 -- nvmf/common.sh@470 -- # waitforlisten 72296 00:18:30.708 11:46:03 -- common/autotest_common.sh@829 -- # '[' -z 72296 ']' 00:18:30.708 11:46:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.708 11:46:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.708 11:46:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.708 11:46:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.708 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:18:30.708 11:46:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:30.708 [2024-11-20 11:46:03.650086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:30.708 [2024-11-20 11:46:03.650154] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.967 [2024-11-20 11:46:03.773121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:30.967 [2024-11-20 11:46:03.865868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:30.967 [2024-11-20 11:46:03.866008] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:30.967 [2024-11-20 11:46:03.866015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.967 [2024-11-20 11:46:03.866021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.967 [2024-11-20 11:46:03.866238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.967 [2024-11-20 11:46:03.866440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.967 [2024-11-20 11:46:03.866619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.967 [2024-11-20 11:46:03.866622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:31.536 11:46:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.536 11:46:04 -- common/autotest_common.sh@862 -- # return 0 00:18:31.536 11:46:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:31.536 11:46:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.536 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.796 11:46:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.796 11:46:04 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.796 11:46:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.796 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.796 [2024-11-20 11:46:04.590098] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.796 11:46:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.796 11:46:04 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:31.796 11:46:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.796 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.796 11:46:04 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:18:31.796 11:46:04 -- target/host_management.sh@23 -- # cat 00:18:31.796 11:46:04 -- target/host_management.sh@30 -- # rpc_cmd 00:18:31.796 11:46:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.796 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.796 Malloc0 00:18:31.796 [2024-11-20 11:46:04.678914] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.796 11:46:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.796 11:46:04 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:31.796 11:46:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.796 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.796 11:46:04 -- target/host_management.sh@73 -- # perfpid=72368 00:18:31.796 11:46:04 -- target/host_management.sh@74 -- # waitforlisten 72368 /var/tmp/bdevperf.sock 00:18:31.796 11:46:04 -- common/autotest_common.sh@829 -- # '[' -z 72368 ']' 00:18:31.796 11:46:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.796 11:46:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.796 11:46:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
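The 10.0.0.x addresses used by this listener and by the ping checks above come from the veth/namespace topology that nvmf_veth_init built at the start of the test: the target interfaces (10.0.0.2, 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator interface (10.0.0.1) stays in the root namespace, and all of them are joined through the nvmf_br bridge. Condensed from that trace (teardown of any previous topology and the ping checks omitted):

# Sketch: the test network assembled by nvmf_veth_init (condensed from the trace above).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target    <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # 2nd target <-> bridge

ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT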
00:18:31.796 11:46:04 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:31.796 11:46:04 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:31.796 11:46:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.796 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.796 11:46:04 -- nvmf/common.sh@520 -- # config=() 00:18:31.796 11:46:04 -- nvmf/common.sh@520 -- # local subsystem config 00:18:31.796 11:46:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:31.796 11:46:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:31.796 { 00:18:31.796 "params": { 00:18:31.796 "name": "Nvme$subsystem", 00:18:31.796 "trtype": "$TEST_TRANSPORT", 00:18:31.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:31.796 "adrfam": "ipv4", 00:18:31.796 "trsvcid": "$NVMF_PORT", 00:18:31.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:31.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:31.796 "hdgst": ${hdgst:-false}, 00:18:31.796 "ddgst": ${ddgst:-false} 00:18:31.796 }, 00:18:31.796 "method": "bdev_nvme_attach_controller" 00:18:31.796 } 00:18:31.796 EOF 00:18:31.796 )") 00:18:31.796 11:46:04 -- nvmf/common.sh@542 -- # cat 00:18:31.796 11:46:04 -- nvmf/common.sh@544 -- # jq . 00:18:31.796 11:46:04 -- nvmf/common.sh@545 -- # IFS=, 00:18:31.796 11:46:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:31.796 "params": { 00:18:31.796 "name": "Nvme0", 00:18:31.796 "trtype": "tcp", 00:18:31.796 "traddr": "10.0.0.2", 00:18:31.796 "adrfam": "ipv4", 00:18:31.796 "trsvcid": "4420", 00:18:31.796 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:31.796 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:31.796 "hdgst": false, 00:18:31.796 "ddgst": false 00:18:31.796 }, 00:18:31.796 "method": "bdev_nvme_attach_controller" 00:18:31.796 }' 00:18:31.796 [2024-11-20 11:46:04.796214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:31.796 [2024-11-20 11:46:04.796273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72368 ] 00:18:32.055 [2024-11-20 11:46:04.932716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.055 [2024-11-20 11:46:05.019239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.366 Running I/O for 10 seconds... 
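bdevperf has no pre-existing bdev configuration here; the harness generates a small JSON config on the fly (the printf shown above), which attaches the target's namespace as local bdev Nvme0, and streams it in over /dev/fd/63. A sketch of the same invocation using a config file instead of a process-substitution fd; the outer subsystems/bdev wrapper is an assumption based on the usual SPDK JSON config layout, since the log only shows the bdev_nvme_attach_controller fragment:

# Sketch: run bdevperf against the TCP target with a saved JSON config.
SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: write-then-read-back workload, -t 10: seconds.
"$SPDK"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10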
00:18:32.936 11:46:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.937 11:46:05 -- common/autotest_common.sh@862 -- # return 0 00:18:32.937 11:46:05 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:32.937 11:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.937 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:18:32.937 11:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.937 11:46:05 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:32.937 11:46:05 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:32.937 11:46:05 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:32.937 11:46:05 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:32.937 11:46:05 -- target/host_management.sh@52 -- # local ret=1 00:18:32.937 11:46:05 -- target/host_management.sh@53 -- # local i 00:18:32.937 11:46:05 -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:32.937 11:46:05 -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:32.937 11:46:05 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:32.937 11:46:05 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:32.937 11:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.937 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:18:32.937 11:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.937 11:46:05 -- target/host_management.sh@55 -- # read_io_count=2521 00:18:32.937 11:46:05 -- target/host_management.sh@58 -- # '[' 2521 -ge 100 ']' 00:18:32.937 11:46:05 -- target/host_management.sh@59 -- # ret=0 00:18:32.937 11:46:05 -- target/host_management.sh@60 -- # break 00:18:32.937 11:46:05 -- target/host_management.sh@64 -- # return 0 00:18:32.937 11:46:05 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:32.937 11:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.937 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:18:32.937 [2024-11-20 11:46:05.747545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.747701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.747748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.747821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.747862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.747906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.747961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the 
state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 task offset: 88448 on job bdev=Nvme0n1 fails 00:18:32.937 00:18:32.937 Latency(us) 00:18:32.937 [2024-11-20T11:46:05.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.937 [2024-11-20T11:46:05.980Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:32.937 [2024-11-20T11:46:05.980Z] Job: Nvme0n1 ended in about 0.58 seconds with error 00:18:32.937 Verification LBA range: start 0x0 length 0x400 00:18:32.937 Nvme0n1 : 0.58 4675.93 292.25 110.02 0.00 13169.61 3334.04 18086.79 00:18:32.937 [2024-11-20T11:46:05.980Z] =================================================================================================================== 00:18:32.937 [2024-11-20T11:46:05.980Z] Total : 4675.93 292.25 110.02 0.00 13169.61 3334.04 18086.79 00:18:32.937 [2024-11-20 11:46:05.748759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748774] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 11:46:05.748848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1910 is same with the state(5) to be set 00:18:32.937 [2024-11-20 
11:46:05.748944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.937 [2024-11-20 11:46:05.748972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.937 [2024-11-20 11:46:05.748990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.937 [2024-11-20 11:46:05.748998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.937 [2024-11-20 11:46:05.749007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.937 [2024-11-20 11:46:05.749013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.937 [2024-11-20 11:46:05.749022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.937 [2024-11-20 11:46:05.749028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.937 [2024-11-20 11:46:05.749036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.937 [2024-11-20 11:46:05.749043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.937 [2024-11-20 11:46:05.749051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.937 [2024-11-20 11:46:05.749059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.937 [2024-11-20 11:46:05.749068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 
11:46:05.749126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749274] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.938 [2024-11-20 11:46:05.749582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.938 [2024-11-20 11:46:05.749587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.939 [2024-11-20 11:46:05.749884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.749890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912400 is same with the state(5) to be set 00:18:32.939 [2024-11-20 11:46:05.749941] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1912400 was disconnected and freed. reset controller. 00:18:32.939 [2024-11-20 11:46:05.750914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:32.939 [2024-11-20 11:46:05.752606] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:32.939 [2024-11-20 11:46:05.752624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193edc0 (9): Bad file descriptor 00:18:32.939 11:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.939 11:46:05 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:32.939 11:46:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.939 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:18:32.939 [2024-11-20 11:46:05.757999] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:18:32.939 [2024-11-20 11:46:05.758173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:32.939 [2024-11-20 11:46:05.758263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.939 [2024-11-20 11:46:05.758366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:18:32.939 [2024-11-20 11:46:05.758418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:18:32.939 [2024-11-20 11:46:05.758457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:18:32.939 [2024-11-20 11:46:05.758493] 
nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x193edc0 00:18:32.939 [2024-11-20 11:46:05.758540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193edc0 (9): Bad file descriptor 00:18:32.939 [2024-11-20 11:46:05.758585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:32.939 [2024-11-20 11:46:05.758626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:32.939 [2024-11-20 11:46:05.758735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:32.939 [2024-11-20 11:46:05.758772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:32.939 11:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.939 11:46:05 -- target/host_management.sh@87 -- # sleep 1 00:18:33.877 11:46:06 -- target/host_management.sh@91 -- # kill -9 72368 00:18:33.877 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72368) - No such process 00:18:33.877 11:46:06 -- target/host_management.sh@91 -- # true 00:18:33.877 11:46:06 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:33.878 11:46:06 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:33.878 11:46:06 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:33.878 11:46:06 -- nvmf/common.sh@520 -- # config=() 00:18:33.878 11:46:06 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.878 11:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.878 11:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.878 { 00:18:33.878 "params": { 00:18:33.878 "name": "Nvme$subsystem", 00:18:33.878 "trtype": "$TEST_TRANSPORT", 00:18:33.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.878 "adrfam": "ipv4", 00:18:33.878 "trsvcid": "$NVMF_PORT", 00:18:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.878 "hdgst": ${hdgst:-false}, 00:18:33.878 "ddgst": ${ddgst:-false} 00:18:33.878 }, 00:18:33.878 "method": "bdev_nvme_attach_controller" 00:18:33.878 } 00:18:33.878 EOF 00:18:33.878 )") 00:18:33.878 11:46:06 -- nvmf/common.sh@542 -- # cat 00:18:33.878 11:46:06 -- nvmf/common.sh@544 -- # jq . 00:18:33.878 11:46:06 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.878 11:46:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.878 "params": { 00:18:33.878 "name": "Nvme0", 00:18:33.878 "trtype": "tcp", 00:18:33.878 "traddr": "10.0.0.2", 00:18:33.878 "adrfam": "ipv4", 00:18:33.878 "trsvcid": "4420", 00:18:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:33.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:33.878 "hdgst": false, 00:18:33.878 "ddgst": false 00:18:33.878 }, 00:18:33.878 "method": "bdev_nvme_attach_controller" 00:18:33.878 }' 00:18:33.878 [2024-11-20 11:46:06.830413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
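For reference, the bdevperf run traced above can be reproduced by hand by saving the generated attach-controller entry to a file instead of streaming it over /dev/fd/62. This is only a sketch: the file name is illustrative, and the outer "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json emits around the printed object; the flags and parameter values themselves are taken verbatim from the trace.

# Sketch only; /tmp/bdevperf_nvme0.json is an illustrative name.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, IO size, workload and runtime as the traced command line.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1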
00:18:33.878 [2024-11-20 11:46:06.830581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72418 ] 00:18:34.137 [2024-11-20 11:46:06.967850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.137 [2024-11-20 11:46:07.065534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.395 Running I/O for 1 seconds... 00:18:35.331 00:18:35.331 Latency(us) 00:18:35.331 [2024-11-20T11:46:08.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.331 [2024-11-20T11:46:08.374Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.331 Verification LBA range: start 0x0 length 0x400 00:18:35.331 Nvme0n1 : 1.00 4731.79 295.74 0.00 0.00 13321.69 747.65 19918.37 00:18:35.331 [2024-11-20T11:46:08.374Z] =================================================================================================================== 00:18:35.331 [2024-11-20T11:46:08.374Z] Total : 4731.79 295.74 0.00 0.00 13321.69 747.65 19918.37 00:18:35.590 11:46:08 -- target/host_management.sh@101 -- # stoptarget 00:18:35.590 11:46:08 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:35.590 11:46:08 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:18:35.590 11:46:08 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:18:35.590 11:46:08 -- target/host_management.sh@40 -- # nvmftestfini 00:18:35.590 11:46:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:35.590 11:46:08 -- nvmf/common.sh@116 -- # sync 00:18:35.590 11:46:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:35.590 11:46:08 -- nvmf/common.sh@119 -- # set +e 00:18:35.590 11:46:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:35.590 11:46:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:35.590 rmmod nvme_tcp 00:18:35.590 rmmod nvme_fabrics 00:18:35.590 rmmod nvme_keyring 00:18:35.590 11:46:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.590 11:46:08 -- nvmf/common.sh@123 -- # set -e 00:18:35.590 11:46:08 -- nvmf/common.sh@124 -- # return 0 00:18:35.590 11:46:08 -- nvmf/common.sh@477 -- # '[' -n 72296 ']' 00:18:35.590 11:46:08 -- nvmf/common.sh@478 -- # killprocess 72296 00:18:35.590 11:46:08 -- common/autotest_common.sh@936 -- # '[' -z 72296 ']' 00:18:35.590 11:46:08 -- common/autotest_common.sh@940 -- # kill -0 72296 00:18:35.590 11:46:08 -- common/autotest_common.sh@941 -- # uname 00:18:35.590 11:46:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.590 11:46:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72296 00:18:35.590 11:46:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:35.590 11:46:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:35.590 11:46:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72296' 00:18:35.590 killing process with pid 72296 00:18:35.590 11:46:08 -- common/autotest_common.sh@955 -- # kill 72296 00:18:35.590 11:46:08 -- common/autotest_common.sh@960 -- # wait 72296 00:18:35.849 [2024-11-20 11:46:08.837020] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:35.849 11:46:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:35.849 11:46:08 -- nvmf/common.sh@483 -- 
# [[ tcp == \t\c\p ]] 00:18:35.849 11:46:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:35.849 11:46:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.849 11:46:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:35.849 11:46:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.849 11:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.849 11:46:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.108 11:46:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:36.108 00:18:36.108 real 0m5.320s 00:18:36.108 user 0m22.142s 00:18:36.108 sys 0m1.206s 00:18:36.108 11:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.108 ************************************ 00:18:36.108 END TEST nvmf_host_management 00:18:36.108 ************************************ 00:18:36.108 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:18:36.108 11:46:08 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:36.108 00:18:36.108 real 0m5.989s 00:18:36.108 user 0m22.371s 00:18:36.108 sys 0m1.541s 00:18:36.108 ************************************ 00:18:36.108 END TEST nvmf_host_management 00:18:36.108 ************************************ 00:18:36.108 11:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.108 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:18:36.108 11:46:09 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:36.108 11:46:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:36.108 11:46:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.108 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:18:36.108 ************************************ 00:18:36.108 START TEST nvmf_lvol 00:18:36.108 ************************************ 00:18:36.108 11:46:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:36.108 * Looking for test storage... 00:18:36.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:36.367 11:46:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:36.367 11:46:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:36.367 11:46:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:36.367 11:46:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:36.367 11:46:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:36.367 11:46:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:36.367 11:46:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:36.367 11:46:09 -- scripts/common.sh@335 -- # IFS=.-: 00:18:36.367 11:46:09 -- scripts/common.sh@335 -- # read -ra ver1 00:18:36.367 11:46:09 -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.367 11:46:09 -- scripts/common.sh@336 -- # read -ra ver2 00:18:36.367 11:46:09 -- scripts/common.sh@337 -- # local 'op=<' 00:18:36.367 11:46:09 -- scripts/common.sh@339 -- # ver1_l=2 00:18:36.367 11:46:09 -- scripts/common.sh@340 -- # ver2_l=1 00:18:36.367 11:46:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:36.367 11:46:09 -- scripts/common.sh@343 -- # case "$op" in 00:18:36.367 11:46:09 -- scripts/common.sh@344 -- # : 1 00:18:36.367 11:46:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:36.367 11:46:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.367 11:46:09 -- scripts/common.sh@364 -- # decimal 1 00:18:36.367 11:46:09 -- scripts/common.sh@352 -- # local d=1 00:18:36.367 11:46:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.367 11:46:09 -- scripts/common.sh@354 -- # echo 1 00:18:36.367 11:46:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:36.367 11:46:09 -- scripts/common.sh@365 -- # decimal 2 00:18:36.367 11:46:09 -- scripts/common.sh@352 -- # local d=2 00:18:36.367 11:46:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.367 11:46:09 -- scripts/common.sh@354 -- # echo 2 00:18:36.367 11:46:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:36.367 11:46:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:36.367 11:46:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:36.367 11:46:09 -- scripts/common.sh@367 -- # return 0 00:18:36.367 11:46:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.367 11:46:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.367 --rc genhtml_branch_coverage=1 00:18:36.367 --rc genhtml_function_coverage=1 00:18:36.367 --rc genhtml_legend=1 00:18:36.367 --rc geninfo_all_blocks=1 00:18:36.367 --rc geninfo_unexecuted_blocks=1 00:18:36.367 00:18:36.367 ' 00:18:36.367 11:46:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.367 --rc genhtml_branch_coverage=1 00:18:36.367 --rc genhtml_function_coverage=1 00:18:36.367 --rc genhtml_legend=1 00:18:36.367 --rc geninfo_all_blocks=1 00:18:36.367 --rc geninfo_unexecuted_blocks=1 00:18:36.367 00:18:36.367 ' 00:18:36.367 11:46:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.367 --rc genhtml_branch_coverage=1 00:18:36.367 --rc genhtml_function_coverage=1 00:18:36.367 --rc genhtml_legend=1 00:18:36.367 --rc geninfo_all_blocks=1 00:18:36.367 --rc geninfo_unexecuted_blocks=1 00:18:36.367 00:18:36.367 ' 00:18:36.367 11:46:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.367 --rc genhtml_branch_coverage=1 00:18:36.367 --rc genhtml_function_coverage=1 00:18:36.367 --rc genhtml_legend=1 00:18:36.367 --rc geninfo_all_blocks=1 00:18:36.367 --rc geninfo_unexecuted_blocks=1 00:18:36.367 00:18:36.367 ' 00:18:36.367 11:46:09 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:36.367 11:46:09 -- nvmf/common.sh@7 -- # uname -s 00:18:36.367 11:46:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.367 11:46:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.367 11:46:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.367 11:46:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.367 11:46:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.367 11:46:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.367 11:46:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.367 11:46:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.367 11:46:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.367 11:46:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.367 11:46:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:18:36.367 
11:46:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:18:36.367 11:46:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.367 11:46:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.367 11:46:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:36.367 11:46:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:36.367 11:46:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.367 11:46:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.367 11:46:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.367 11:46:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.367 11:46:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.367 11:46:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.367 11:46:09 -- paths/export.sh@5 -- # export PATH 00:18:36.367 11:46:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.367 11:46:09 -- nvmf/common.sh@46 -- # : 0 00:18:36.367 11:46:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:36.367 11:46:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:36.367 11:46:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:36.367 11:46:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.367 11:46:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.367 11:46:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:36.367 11:46:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:36.367 11:46:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:36.367 11:46:09 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.367 11:46:09 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.367 11:46:09 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:36.367 11:46:09 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:36.367 11:46:09 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.367 11:46:09 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:36.367 11:46:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:36.367 11:46:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.367 11:46:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:36.367 11:46:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:36.367 11:46:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:36.367 11:46:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.367 11:46:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.367 11:46:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.367 11:46:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:36.367 11:46:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:36.367 11:46:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:36.367 11:46:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:36.367 11:46:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:36.367 11:46:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:36.367 11:46:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.367 11:46:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.367 11:46:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:36.367 11:46:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:36.367 11:46:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:36.367 11:46:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:36.367 11:46:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:36.367 11:46:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.367 11:46:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:36.367 11:46:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:36.367 11:46:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:36.367 11:46:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:36.367 11:46:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:36.367 11:46:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:36.367 Cannot find device "nvmf_tgt_br" 00:18:36.367 11:46:09 -- nvmf/common.sh@154 -- # true 00:18:36.367 11:46:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:36.367 Cannot find device "nvmf_tgt_br2" 00:18:36.367 11:46:09 -- nvmf/common.sh@155 -- # true 00:18:36.367 11:46:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:36.367 11:46:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:36.367 Cannot find device "nvmf_tgt_br" 00:18:36.367 11:46:09 -- nvmf/common.sh@157 -- # true 00:18:36.367 11:46:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:36.367 Cannot find device "nvmf_tgt_br2" 00:18:36.367 11:46:09 -- nvmf/common.sh@158 -- # true 00:18:36.367 11:46:09 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:18:36.627 11:46:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:36.627 11:46:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:36.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.627 11:46:09 -- nvmf/common.sh@161 -- # true 00:18:36.627 11:46:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:36.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.627 11:46:09 -- nvmf/common.sh@162 -- # true 00:18:36.627 11:46:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:36.627 11:46:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:36.627 11:46:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:36.627 11:46:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:36.627 11:46:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:36.627 11:46:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:36.627 11:46:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:36.627 11:46:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:36.627 11:46:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:36.627 11:46:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:36.627 11:46:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:36.627 11:46:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:36.627 11:46:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:36.627 11:46:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:36.627 11:46:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:36.627 11:46:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:36.627 11:46:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:36.627 11:46:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:36.627 11:46:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:36.627 11:46:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:36.627 11:46:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:36.627 11:46:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:36.627 11:46:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:36.627 11:46:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:36.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:18:36.627 00:18:36.627 --- 10.0.0.2 ping statistics --- 00:18:36.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.627 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:36.627 11:46:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:36.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:36.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.026 ms 00:18:36.627 00:18:36.627 --- 10.0.0.3 ping statistics --- 00:18:36.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.627 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:36.627 11:46:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:36.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:18:36.627 00:18:36.627 --- 10.0.0.1 ping statistics --- 00:18:36.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.627 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:36.627 11:46:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.627 11:46:09 -- nvmf/common.sh@421 -- # return 0 00:18:36.627 11:46:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:36.627 11:46:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.627 11:46:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:36.627 11:46:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:36.627 11:46:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.627 11:46:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:36.627 11:46:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:36.627 11:46:09 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:36.627 11:46:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:36.627 11:46:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.627 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 11:46:09 -- nvmf/common.sh@469 -- # nvmfpid=72662 00:18:36.627 11:46:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:36.627 11:46:09 -- nvmf/common.sh@470 -- # waitforlisten 72662 00:18:36.627 11:46:09 -- common/autotest_common.sh@829 -- # '[' -z 72662 ']' 00:18:36.627 11:46:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.627 11:46:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.627 11:46:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.627 11:46:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.627 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:18:36.887 [2024-11-20 11:46:09.697912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:36.887 [2024-11-20 11:46:09.697984] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.887 [2024-11-20 11:46:09.835487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:37.146 [2024-11-20 11:46:09.982510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:37.146 [2024-11-20 11:46:09.982649] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.146 [2024-11-20 11:46:09.982667] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
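Condensed into one place, the veth/bridge topology that nvmf_veth_init builds in the trace above (and inside which the nvmf_tgt process started below runs) looks like the following sketch; the namespace, interface names and addresses are exactly those from the trace, with only the cleanup steps and ping checks omitted.

# Target interfaces live in a private namespace; the initiator side stays on the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listen addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# A bridge on the host ties the peer ends together so 10.0.0.1 can reach 10.0.0.2/3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Open the NVMe/TCP port and allow bridge-local forwarding, as in the trace.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT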
00:18:37.146 [2024-11-20 11:46:09.982673] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.146 [2024-11-20 11:46:09.982793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.146 [2024-11-20 11:46:09.982921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.146 [2024-11-20 11:46:09.982923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.716 11:46:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.716 11:46:10 -- common/autotest_common.sh@862 -- # return 0 00:18:37.716 11:46:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:37.716 11:46:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.716 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:18:37.716 11:46:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.716 11:46:10 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:37.976 [2024-11-20 11:46:10.868829] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.976 11:46:10 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.235 11:46:11 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:38.235 11:46:11 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.495 11:46:11 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:38.495 11:46:11 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:38.754 11:46:11 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:39.013 11:46:11 -- target/nvmf_lvol.sh@29 -- # lvs=b94c8fe3-f332-4e80-bdaf-718d2ed17858 00:18:39.013 11:46:11 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b94c8fe3-f332-4e80-bdaf-718d2ed17858 lvol 20 00:18:39.273 11:46:12 -- target/nvmf_lvol.sh@32 -- # lvol=7bd7b9ad-1bfb-4bfe-a8d6-bdb5684dc14c 00:18:39.273 11:46:12 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:39.533 11:46:12 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7bd7b9ad-1bfb-4bfe-a8d6-bdb5684dc14c 00:18:39.533 11:46:12 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:39.793 [2024-11-20 11:46:12.728824] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.793 11:46:12 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:40.052 11:46:12 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:40.052 11:46:12 -- target/nvmf_lvol.sh@42 -- # perf_pid=72804 00:18:40.052 11:46:12 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:40.990 11:46:13 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7bd7b9ad-1bfb-4bfe-a8d6-bdb5684dc14c MY_SNAPSHOT 
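Condensed from the xtrace output, the lvol stack that nvmf_lvol.sh provisions above is built with the following RPC sequence (a sketch: the UUIDs shown in the comments are the ones returned in this particular run and will differ on a re-run; commands and arguments are taken from the trace).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
# Two 64 MB / 512-byte-block malloc bdevs striped into a RAID-0 that backs the lvol store.
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # e.g. b94c8fe3-f332-4e80-bdaf-718d2ed17858
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # e.g. 7bd7b9ad-1bfb-4bfe-a8d6-bdb5684dc14c
# Export the 20 MB lvol over NVMe/TCP on the namespace address configured earlier.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Random-write load runs in the background while the snapshot/resize/clone/inflate steps follow.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)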
00:18:41.322 11:46:14 -- target/nvmf_lvol.sh@47 -- # snapshot=0dbd2086-60c1-45ff-8955-20c3899bcfe6 00:18:41.322 11:46:14 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7bd7b9ad-1bfb-4bfe-a8d6-bdb5684dc14c 30 00:18:41.581 11:46:14 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0dbd2086-60c1-45ff-8955-20c3899bcfe6 MY_CLONE 00:18:41.841 11:46:14 -- target/nvmf_lvol.sh@49 -- # clone=da06bf60-4800-42cf-a2fa-1e8335947b33 00:18:41.841 11:46:14 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate da06bf60-4800-42cf-a2fa-1e8335947b33 00:18:42.410 11:46:15 -- target/nvmf_lvol.sh@53 -- # wait 72804 00:18:50.540 Initializing NVMe Controllers 00:18:50.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:50.540 Controller IO queue size 128, less than required. 00:18:50.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:50.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:50.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:50.540 Initialization complete. Launching workers. 00:18:50.540 ======================================================== 00:18:50.540 Latency(us) 00:18:50.540 Device Information : IOPS MiB/s Average min max 00:18:50.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7603.69 29.70 16852.34 2273.24 66884.52 00:18:50.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9836.49 38.42 13020.50 1347.66 74033.67 00:18:50.540 ======================================================== 00:18:50.540 Total : 17440.18 68.13 14691.13 1347.66 74033.67 00:18:50.541 00:18:50.541 11:46:23 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:50.541 11:46:23 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7bd7b9ad-1bfb-4bfe-a8d6-bdb5684dc14c 00:18:50.800 11:46:23 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b94c8fe3-f332-4e80-bdaf-718d2ed17858 00:18:51.060 11:46:23 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:51.060 11:46:23 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:51.060 11:46:23 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:51.060 11:46:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:51.060 11:46:23 -- nvmf/common.sh@116 -- # sync 00:18:51.060 11:46:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:51.060 11:46:23 -- nvmf/common.sh@119 -- # set +e 00:18:51.060 11:46:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:51.060 11:46:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:51.060 rmmod nvme_tcp 00:18:51.060 rmmod nvme_fabrics 00:18:51.060 rmmod nvme_keyring 00:18:51.060 11:46:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:51.060 11:46:23 -- nvmf/common.sh@123 -- # set -e 00:18:51.060 11:46:23 -- nvmf/common.sh@124 -- # return 0 00:18:51.060 11:46:23 -- nvmf/common.sh@477 -- # '[' -n 72662 ']' 00:18:51.060 11:46:23 -- nvmf/common.sh@478 -- # killprocess 72662 00:18:51.060 11:46:23 -- common/autotest_common.sh@936 -- # '[' -z 72662 ']' 00:18:51.060 11:46:23 -- common/autotest_common.sh@940 -- # kill -0 72662 00:18:51.060 11:46:24 -- common/autotest_common.sh@941 -- # uname 00:18:51.060 
11:46:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.060 11:46:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72662 00:18:51.060 killing process with pid 72662 00:18:51.060 11:46:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.060 11:46:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.060 11:46:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72662' 00:18:51.060 11:46:24 -- common/autotest_common.sh@955 -- # kill 72662 00:18:51.060 11:46:24 -- common/autotest_common.sh@960 -- # wait 72662 00:18:51.320 11:46:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:51.320 11:46:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:51.320 11:46:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:51.320 11:46:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.320 11:46:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:51.320 11:46:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.320 11:46:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.320 11:46:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.580 11:46:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:51.580 00:18:51.580 real 0m15.362s 00:18:51.580 user 1m4.515s 00:18:51.580 sys 0m3.117s 00:18:51.580 11:46:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:51.580 11:46:24 -- common/autotest_common.sh@10 -- # set +x 00:18:51.580 ************************************ 00:18:51.580 END TEST nvmf_lvol 00:18:51.580 ************************************ 00:18:51.580 11:46:24 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:51.580 11:46:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:51.580 11:46:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.580 11:46:24 -- common/autotest_common.sh@10 -- # set +x 00:18:51.580 ************************************ 00:18:51.580 START TEST nvmf_lvs_grow 00:18:51.580 ************************************ 00:18:51.580 11:46:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:51.580 * Looking for test storage... 
00:18:51.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:51.580 11:46:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:51.580 11:46:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:51.580 11:46:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:51.841 11:46:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:51.841 11:46:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:51.841 11:46:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:51.841 11:46:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:51.841 11:46:24 -- scripts/common.sh@335 -- # IFS=.-: 00:18:51.841 11:46:24 -- scripts/common.sh@335 -- # read -ra ver1 00:18:51.841 11:46:24 -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.841 11:46:24 -- scripts/common.sh@336 -- # read -ra ver2 00:18:51.841 11:46:24 -- scripts/common.sh@337 -- # local 'op=<' 00:18:51.841 11:46:24 -- scripts/common.sh@339 -- # ver1_l=2 00:18:51.841 11:46:24 -- scripts/common.sh@340 -- # ver2_l=1 00:18:51.841 11:46:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:51.841 11:46:24 -- scripts/common.sh@343 -- # case "$op" in 00:18:51.841 11:46:24 -- scripts/common.sh@344 -- # : 1 00:18:51.841 11:46:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:51.841 11:46:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.841 11:46:24 -- scripts/common.sh@364 -- # decimal 1 00:18:51.841 11:46:24 -- scripts/common.sh@352 -- # local d=1 00:18:51.841 11:46:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.841 11:46:24 -- scripts/common.sh@354 -- # echo 1 00:18:51.841 11:46:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:51.841 11:46:24 -- scripts/common.sh@365 -- # decimal 2 00:18:51.841 11:46:24 -- scripts/common.sh@352 -- # local d=2 00:18:51.841 11:46:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.841 11:46:24 -- scripts/common.sh@354 -- # echo 2 00:18:51.841 11:46:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:51.841 11:46:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:51.841 11:46:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:51.841 11:46:24 -- scripts/common.sh@367 -- # return 0 00:18:51.841 11:46:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.841 11:46:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:51.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.841 --rc genhtml_branch_coverage=1 00:18:51.841 --rc genhtml_function_coverage=1 00:18:51.841 --rc genhtml_legend=1 00:18:51.841 --rc geninfo_all_blocks=1 00:18:51.841 --rc geninfo_unexecuted_blocks=1 00:18:51.841 00:18:51.841 ' 00:18:51.841 11:46:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:51.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.841 --rc genhtml_branch_coverage=1 00:18:51.841 --rc genhtml_function_coverage=1 00:18:51.841 --rc genhtml_legend=1 00:18:51.841 --rc geninfo_all_blocks=1 00:18:51.841 --rc geninfo_unexecuted_blocks=1 00:18:51.841 00:18:51.841 ' 00:18:51.841 11:46:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:51.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.841 --rc genhtml_branch_coverage=1 00:18:51.841 --rc genhtml_function_coverage=1 00:18:51.841 --rc genhtml_legend=1 00:18:51.841 --rc geninfo_all_blocks=1 00:18:51.841 --rc geninfo_unexecuted_blocks=1 00:18:51.841 00:18:51.841 ' 00:18:51.841 
11:46:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:51.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.841 --rc genhtml_branch_coverage=1 00:18:51.841 --rc genhtml_function_coverage=1 00:18:51.841 --rc genhtml_legend=1 00:18:51.841 --rc geninfo_all_blocks=1 00:18:51.841 --rc geninfo_unexecuted_blocks=1 00:18:51.841 00:18:51.841 ' 00:18:51.841 11:46:24 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:51.841 11:46:24 -- nvmf/common.sh@7 -- # uname -s 00:18:51.841 11:46:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.841 11:46:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.841 11:46:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.841 11:46:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.841 11:46:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.841 11:46:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.841 11:46:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.841 11:46:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.841 11:46:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.841 11:46:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.841 11:46:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:18:51.841 11:46:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:18:51.841 11:46:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.841 11:46:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.841 11:46:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:51.841 11:46:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:51.841 11:46:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.841 11:46:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.841 11:46:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.841 11:46:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.841 11:46:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.841 11:46:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.841 11:46:24 -- paths/export.sh@5 -- # export PATH 00:18:51.841 11:46:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.841 11:46:24 -- nvmf/common.sh@46 -- # : 0 00:18:51.842 11:46:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:51.842 11:46:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:51.842 11:46:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:51.842 11:46:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.842 11:46:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.842 11:46:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:51.842 11:46:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:51.842 11:46:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:51.842 11:46:24 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.842 11:46:24 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.842 11:46:24 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:51.842 11:46:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:51.842 11:46:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.842 11:46:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:51.842 11:46:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:51.842 11:46:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:51.842 11:46:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.842 11:46:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.842 11:46:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.842 11:46:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:51.842 11:46:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:51.842 11:46:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:51.842 11:46:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:51.842 11:46:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:51.842 11:46:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:51.842 11:46:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.842 11:46:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.842 11:46:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:51.842 11:46:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:51.842 11:46:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:51.842 11:46:24 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:51.842 11:46:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:51.842 11:46:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.842 11:46:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:51.842 11:46:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:51.842 11:46:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:51.842 11:46:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:51.842 11:46:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:51.842 11:46:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:51.842 Cannot find device "nvmf_tgt_br" 00:18:51.842 11:46:24 -- nvmf/common.sh@154 -- # true 00:18:51.842 11:46:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:51.842 Cannot find device "nvmf_tgt_br2" 00:18:51.842 11:46:24 -- nvmf/common.sh@155 -- # true 00:18:51.842 11:46:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:51.842 11:46:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:51.842 Cannot find device "nvmf_tgt_br" 00:18:51.842 11:46:24 -- nvmf/common.sh@157 -- # true 00:18:51.842 11:46:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:51.842 Cannot find device "nvmf_tgt_br2" 00:18:51.842 11:46:24 -- nvmf/common.sh@158 -- # true 00:18:51.842 11:46:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:51.842 11:46:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:51.842 11:46:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.103 11:46:24 -- nvmf/common.sh@161 -- # true 00:18:52.103 11:46:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.103 11:46:24 -- nvmf/common.sh@162 -- # true 00:18:52.103 11:46:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.103 11:46:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.103 11:46:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.103 11:46:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.103 11:46:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.103 11:46:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.103 11:46:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.103 11:46:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:52.103 11:46:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:52.103 11:46:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:52.103 11:46:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:52.103 11:46:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:52.103 11:46:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:52.103 11:46:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.103 11:46:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:18:52.103 11:46:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.103 11:46:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:52.103 11:46:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:52.103 11:46:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.103 11:46:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.103 11:46:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.103 11:46:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.103 11:46:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.103 11:46:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:52.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:18:52.103 00:18:52.103 --- 10.0.0.2 ping statistics --- 00:18:52.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.103 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:52.103 11:46:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:52.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:52.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:18:52.103 00:18:52.103 --- 10.0.0.3 ping statistics --- 00:18:52.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.103 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:52.103 11:46:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:52.103 00:18:52.103 --- 10.0.0.1 ping statistics --- 00:18:52.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.103 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:52.103 11:46:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.103 11:46:25 -- nvmf/common.sh@421 -- # return 0 00:18:52.103 11:46:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:52.103 11:46:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.103 11:46:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:52.103 11:46:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:52.103 11:46:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.103 11:46:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:52.103 11:46:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:52.103 11:46:25 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:52.103 11:46:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:52.103 11:46:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.103 11:46:25 -- common/autotest_common.sh@10 -- # set +x 00:18:52.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
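The nvmf_veth_init sequence traced above reduces to the following topology sketch; interface names and addresses are the ones this run uses, and the real helper in nvmf/common.sh additionally brings every link up and adds the FORWARD rule shown in the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side moves into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second listener address
  ip link add nvmf_br type bridge                                  # bridge ties the *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # the three ping blocks above are this reachability check for .2, .3 and .1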
00:18:52.103 11:46:25 -- nvmf/common.sh@469 -- # nvmfpid=73183 00:18:52.103 11:46:25 -- nvmf/common.sh@470 -- # waitforlisten 73183 00:18:52.103 11:46:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:52.103 11:46:25 -- common/autotest_common.sh@829 -- # '[' -z 73183 ']' 00:18:52.103 11:46:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.103 11:46:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.103 11:46:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.103 11:46:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.103 11:46:25 -- common/autotest_common.sh@10 -- # set +x 00:18:52.103 [2024-11-20 11:46:25.101922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:52.103 [2024-11-20 11:46:25.101981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.362 [2024-11-20 11:46:25.237975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.362 [2024-11-20 11:46:25.321223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:52.362 [2024-11-20 11:46:25.321341] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.362 [2024-11-20 11:46:25.321349] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.362 [2024-11-20 11:46:25.321354] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
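Stripped of the harness wrappers, starting the target for this test amounts to the sketch below; waitforlisten is the helper that polls /var/tmp/spdk.sock until the RPC server answers, and the flags are copied from the command line traced above:

  # run the target on one core inside the test namespace, all tracepoint groups enabled
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # once the RPC socket is up, register the TCP transport (flags as used in this run)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192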
00:18:52.362 [2024-11-20 11:46:25.321379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.930 11:46:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.930 11:46:25 -- common/autotest_common.sh@862 -- # return 0 00:18:52.930 11:46:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:52.930 11:46:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.930 11:46:25 -- common/autotest_common.sh@10 -- # set +x 00:18:53.191 11:46:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.191 11:46:26 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:53.191 [2024-11-20 11:46:26.196834] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.191 11:46:26 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:53.191 11:46:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:53.191 11:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:53.191 11:46:26 -- common/autotest_common.sh@10 -- # set +x 00:18:53.191 ************************************ 00:18:53.191 START TEST lvs_grow_clean 00:18:53.191 ************************************ 00:18:53.191 11:46:26 -- common/autotest_common.sh@1114 -- # lvs_grow 00:18:53.191 11:46:26 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:53.191 11:46:26 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:53.191 11:46:26 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:53.191 11:46:26 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:53.191 11:46:26 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:53.451 11:46:26 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:53.451 11:46:26 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:53.451 11:46:26 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:53.451 11:46:26 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:53.451 11:46:26 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:53.451 11:46:26 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:53.710 11:46:26 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c2b627b6-efe0-4dd8-a270-9ae26375840c 00:18:53.711 11:46:26 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:18:53.711 11:46:26 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:53.970 11:46:26 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:53.970 11:46:26 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:53.970 11:46:26 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c2b627b6-efe0-4dd8-a270-9ae26375840c lvol 150 00:18:54.230 11:46:27 -- target/nvmf_lvs_grow.sh@33 -- # lvol=402bc945-12a2-4fe4-86dc-afbd0d51a6fc 00:18:54.230 11:46:27 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:54.230 11:46:27 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:54.230 [2024-11-20 11:46:27.262620] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:54.230 [2024-11-20 11:46:27.262690] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:54.230 true 00:18:54.490 11:46:27 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:18:54.490 11:46:27 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:54.490 11:46:27 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:54.490 11:46:27 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:54.772 11:46:27 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 402bc945-12a2-4fe4-86dc-afbd0d51a6fc 00:18:55.043 11:46:27 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:55.043 [2024-11-20 11:46:28.053527] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.043 11:46:28 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:55.303 11:46:28 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73339 00:18:55.304 11:46:28 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:55.304 11:46:28 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.304 11:46:28 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73339 /var/tmp/bdevperf.sock 00:18:55.304 11:46:28 -- common/autotest_common.sh@829 -- # '[' -z 73339 ']' 00:18:55.304 11:46:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.304 11:46:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.304 11:46:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.304 11:46:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.304 11:46:28 -- common/autotest_common.sh@10 -- # set +x 00:18:55.304 [2024-11-20 11:46:28.342747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
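Exporting the freshly created lvol over NVMe-oF/TCP is the same handful of RPCs in every pass of this test; a condensed sketch with the UUID from the clean pass ($lvol is just a shell variable introduced here for readability):

  lvol=402bc945-12a2-4fe4-86dc-afbd0d51a6fc
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0      # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"          # lvol becomes namespace 1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420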
00:18:55.304 [2024-11-20 11:46:28.342804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73339 ] 00:18:55.563 [2024-11-20 11:46:28.481367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.563 [2024-11-20 11:46:28.565342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.501 11:46:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.501 11:46:29 -- common/autotest_common.sh@862 -- # return 0 00:18:56.501 11:46:29 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:56.501 Nvme0n1 00:18:56.501 11:46:29 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:56.760 [ 00:18:56.760 { 00:18:56.760 "aliases": [ 00:18:56.760 "402bc945-12a2-4fe4-86dc-afbd0d51a6fc" 00:18:56.760 ], 00:18:56.760 "assigned_rate_limits": { 00:18:56.760 "r_mbytes_per_sec": 0, 00:18:56.760 "rw_ios_per_sec": 0, 00:18:56.760 "rw_mbytes_per_sec": 0, 00:18:56.760 "w_mbytes_per_sec": 0 00:18:56.760 }, 00:18:56.760 "block_size": 4096, 00:18:56.760 "claimed": false, 00:18:56.760 "driver_specific": { 00:18:56.760 "mp_policy": "active_passive", 00:18:56.760 "nvme": [ 00:18:56.760 { 00:18:56.760 "ctrlr_data": { 00:18:56.760 "ana_reporting": false, 00:18:56.760 "cntlid": 1, 00:18:56.760 "firmware_revision": "24.01.1", 00:18:56.760 "model_number": "SPDK bdev Controller", 00:18:56.760 "multi_ctrlr": true, 00:18:56.760 "oacs": { 00:18:56.760 "firmware": 0, 00:18:56.760 "format": 0, 00:18:56.760 "ns_manage": 0, 00:18:56.760 "security": 0 00:18:56.760 }, 00:18:56.760 "serial_number": "SPDK0", 00:18:56.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:56.760 "vendor_id": "0x8086" 00:18:56.760 }, 00:18:56.760 "ns_data": { 00:18:56.760 "can_share": true, 00:18:56.760 "id": 1 00:18:56.760 }, 00:18:56.760 "trid": { 00:18:56.760 "adrfam": "IPv4", 00:18:56.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:56.760 "traddr": "10.0.0.2", 00:18:56.760 "trsvcid": "4420", 00:18:56.760 "trtype": "TCP" 00:18:56.760 }, 00:18:56.760 "vs": { 00:18:56.760 "nvme_version": "1.3" 00:18:56.760 } 00:18:56.760 } 00:18:56.760 ] 00:18:56.760 }, 00:18:56.760 "name": "Nvme0n1", 00:18:56.760 "num_blocks": 38912, 00:18:56.760 "product_name": "NVMe disk", 00:18:56.760 "supported_io_types": { 00:18:56.760 "abort": true, 00:18:56.760 "compare": true, 00:18:56.760 "compare_and_write": true, 00:18:56.760 "flush": true, 00:18:56.760 "nvme_admin": true, 00:18:56.760 "nvme_io": true, 00:18:56.760 "read": true, 00:18:56.760 "reset": true, 00:18:56.760 "unmap": true, 00:18:56.760 "write": true, 00:18:56.760 "write_zeroes": true 00:18:56.760 }, 00:18:56.760 "uuid": "402bc945-12a2-4fe4-86dc-afbd0d51a6fc", 00:18:56.760 "zoned": false 00:18:56.760 } 00:18:56.760 ] 00:18:56.760 11:46:29 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.760 11:46:29 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73388 00:18:56.760 11:46:29 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:56.760 Running I/O for 10 seconds... 
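Behind the bdevperf run that starts here, the clean-grow pass boils down to the sequence below (file name shortened; the sizes, cluster size and the 49 -> 99 cluster counts are the ones reported in this log, and $lvs_uuid stands for the store UUID returned by bdev_lvol_create_lvstore):

  truncate -s 200M aio_file
  ./scripts/rpc.py bdev_aio_create aio_file aio_bdev 4096                       # 4 KiB block size
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs   # 49 usable 4 MiB clusters
  ./scripts/rpc.py bdev_lvol_create -u "$lvs_uuid" lvol 150                     # 150 MiB volume, exported above
  truncate -s 400M aio_file                                                     # grow the backing file
  ./scripts/rpc.py bdev_aio_rescan aio_bdev                                     # 51200 -> 102400 blocks
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"                        # clusters grow from 49 to 99
  ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'

In the test itself the grow is issued while bdevperf keeps writing to the exported lvol, which is the point of the exercise.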
00:18:58.141 Latency(us) 00:18:58.141 [2024-11-20T11:46:31.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.141 [2024-11-20T11:46:31.184Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:58.141 Nvme0n1 : 1.00 10091.00 39.42 0.00 0.00 0.00 0.00 0.00 00:18:58.141 [2024-11-20T11:46:31.184Z] =================================================================================================================== 00:18:58.141 [2024-11-20T11:46:31.184Z] Total : 10091.00 39.42 0.00 0.00 0.00 0.00 0.00 00:18:58.141 00:18:58.711 11:46:31 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:18:58.971 [2024-11-20T11:46:32.014Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:58.971 Nvme0n1 : 2.00 10347.50 40.42 0.00 0.00 0.00 0.00 0.00 00:18:58.971 [2024-11-20T11:46:32.014Z] =================================================================================================================== 00:18:58.971 [2024-11-20T11:46:32.014Z] Total : 10347.50 40.42 0.00 0.00 0.00 0.00 0.00 00:18:58.971 00:18:58.971 true 00:18:58.971 11:46:31 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:18:58.971 11:46:31 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:59.230 11:46:32 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:59.230 11:46:32 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:59.230 11:46:32 -- target/nvmf_lvs_grow.sh@65 -- # wait 73388 00:18:59.797 [2024-11-20T11:46:32.840Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:59.797 Nvme0n1 : 3.00 10761.33 42.04 0.00 0.00 0.00 0.00 0.00 00:18:59.797 [2024-11-20T11:46:32.840Z] =================================================================================================================== 00:18:59.797 [2024-11-20T11:46:32.840Z] Total : 10761.33 42.04 0.00 0.00 0.00 0.00 0.00 00:18:59.797 00:19:00.737 [2024-11-20T11:46:33.780Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.737 Nvme0n1 : 4.00 10591.75 41.37 0.00 0.00 0.00 0.00 0.00 00:19:00.737 [2024-11-20T11:46:33.780Z] =================================================================================================================== 00:19:00.737 [2024-11-20T11:46:33.780Z] Total : 10591.75 41.37 0.00 0.00 0.00 0.00 0.00 00:19:00.737 00:19:02.139 [2024-11-20T11:46:35.182Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:02.139 Nvme0n1 : 5.00 10793.80 42.16 0.00 0.00 0.00 0.00 0.00 00:19:02.139 [2024-11-20T11:46:35.182Z] =================================================================================================================== 00:19:02.139 [2024-11-20T11:46:35.182Z] Total : 10793.80 42.16 0.00 0.00 0.00 0.00 0.00 00:19:02.139 00:19:02.708 [2024-11-20T11:46:35.751Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:02.708 Nvme0n1 : 6.00 10889.50 42.54 0.00 0.00 0.00 0.00 0.00 00:19:02.708 [2024-11-20T11:46:35.751Z] =================================================================================================================== 00:19:02.708 [2024-11-20T11:46:35.751Z] Total : 10889.50 42.54 0.00 0.00 0.00 0.00 0.00 00:19:02.708 00:19:04.086 [2024-11-20T11:46:37.129Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:19:04.086 Nvme0n1 : 7.00 10856.43 42.41 0.00 0.00 0.00 0.00 0.00 00:19:04.086 [2024-11-20T11:46:37.129Z] =================================================================================================================== 00:19:04.086 [2024-11-20T11:46:37.129Z] Total : 10856.43 42.41 0.00 0.00 0.00 0.00 0.00 00:19:04.086 00:19:05.022 [2024-11-20T11:46:38.065Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:05.022 Nvme0n1 : 8.00 10934.38 42.71 0.00 0.00 0.00 0.00 0.00 00:19:05.022 [2024-11-20T11:46:38.065Z] =================================================================================================================== 00:19:05.022 [2024-11-20T11:46:38.065Z] Total : 10934.38 42.71 0.00 0.00 0.00 0.00 0.00 00:19:05.022 00:19:05.958 [2024-11-20T11:46:39.001Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:05.958 Nvme0n1 : 9.00 10975.33 42.87 0.00 0.00 0.00 0.00 0.00 00:19:05.958 [2024-11-20T11:46:39.001Z] =================================================================================================================== 00:19:05.958 [2024-11-20T11:46:39.001Z] Total : 10975.33 42.87 0.00 0.00 0.00 0.00 0.00 00:19:05.958 00:19:06.894 [2024-11-20T11:46:39.937Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:06.894 Nvme0n1 : 10.00 10997.50 42.96 0.00 0.00 0.00 0.00 0.00 00:19:06.894 [2024-11-20T11:46:39.937Z] =================================================================================================================== 00:19:06.894 [2024-11-20T11:46:39.937Z] Total : 10997.50 42.96 0.00 0.00 0.00 0.00 0.00 00:19:06.894 00:19:06.894 00:19:06.894 Latency(us) 00:19:06.894 [2024-11-20T11:46:39.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.894 [2024-11-20T11:46:39.937Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:06.894 Nvme0n1 : 10.00 11005.73 42.99 0.00 0.00 11626.54 4979.59 189567.89 00:19:06.894 [2024-11-20T11:46:39.937Z] =================================================================================================================== 00:19:06.894 [2024-11-20T11:46:39.937Z] Total : 11005.73 42.99 0.00 0.00 11626.54 4979.59 189567.89 00:19:06.894 0 00:19:06.894 11:46:39 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73339 00:19:06.894 11:46:39 -- common/autotest_common.sh@936 -- # '[' -z 73339 ']' 00:19:06.894 11:46:39 -- common/autotest_common.sh@940 -- # kill -0 73339 00:19:06.894 11:46:39 -- common/autotest_common.sh@941 -- # uname 00:19:06.894 11:46:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.894 11:46:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73339 00:19:06.894 killing process with pid 73339 00:19:06.894 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.894 00:19:06.894 Latency(us) 00:19:06.894 [2024-11-20T11:46:39.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.894 [2024-11-20T11:46:39.937Z] =================================================================================================================== 00:19:06.894 [2024-11-20T11:46:39.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.894 11:46:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:06.894 11:46:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:06.894 11:46:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73339' 00:19:06.894 11:46:39 -- 
common/autotest_common.sh@955 -- # kill 73339 00:19:06.894 11:46:39 -- common/autotest_common.sh@960 -- # wait 73339 00:19:07.152 11:46:40 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:07.411 11:46:40 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:19:07.411 11:46:40 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:07.670 11:46:40 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:07.670 11:46:40 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:19:07.670 11:46:40 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:07.670 [2024-11-20 11:46:40.633376] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:07.670 11:46:40 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:19:07.670 11:46:40 -- common/autotest_common.sh@650 -- # local es=0 00:19:07.670 11:46:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:19:07.670 11:46:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.670 11:46:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.670 11:46:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.670 11:46:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.670 11:46:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.670 11:46:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.670 11:46:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.670 11:46:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:07.670 11:46:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:19:07.929 2024/11/20 11:46:40 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c2b627b6-efe0-4dd8-a270-9ae26375840c], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:19:07.929 request: 00:19:07.929 { 00:19:07.929 "method": "bdev_lvol_get_lvstores", 00:19:07.929 "params": { 00:19:07.929 "uuid": "c2b627b6-efe0-4dd8-a270-9ae26375840c" 00:19:07.929 } 00:19:07.929 } 00:19:07.929 Got JSON-RPC error response 00:19:07.929 GoRPCClient: error on JSON-RPC call 00:19:07.929 11:46:40 -- common/autotest_common.sh@653 -- # es=1 00:19:07.929 11:46:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.929 11:46:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.929 11:46:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.929 11:46:40 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:08.187 aio_bdev 00:19:08.187 11:46:41 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 402bc945-12a2-4fe4-86dc-afbd0d51a6fc 00:19:08.187 11:46:41 -- common/autotest_common.sh@897 -- # local 
bdev_name=402bc945-12a2-4fe4-86dc-afbd0d51a6fc 00:19:08.187 11:46:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:08.187 11:46:41 -- common/autotest_common.sh@899 -- # local i 00:19:08.187 11:46:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:08.187 11:46:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:08.187 11:46:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:08.461 11:46:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 402bc945-12a2-4fe4-86dc-afbd0d51a6fc -t 2000 00:19:08.738 [ 00:19:08.738 { 00:19:08.738 "aliases": [ 00:19:08.738 "lvs/lvol" 00:19:08.738 ], 00:19:08.738 "assigned_rate_limits": { 00:19:08.738 "r_mbytes_per_sec": 0, 00:19:08.738 "rw_ios_per_sec": 0, 00:19:08.738 "rw_mbytes_per_sec": 0, 00:19:08.738 "w_mbytes_per_sec": 0 00:19:08.738 }, 00:19:08.738 "block_size": 4096, 00:19:08.738 "claimed": false, 00:19:08.738 "driver_specific": { 00:19:08.738 "lvol": { 00:19:08.738 "base_bdev": "aio_bdev", 00:19:08.738 "clone": false, 00:19:08.738 "esnap_clone": false, 00:19:08.738 "lvol_store_uuid": "c2b627b6-efe0-4dd8-a270-9ae26375840c", 00:19:08.738 "snapshot": false, 00:19:08.738 "thin_provision": false 00:19:08.738 } 00:19:08.738 }, 00:19:08.738 "name": "402bc945-12a2-4fe4-86dc-afbd0d51a6fc", 00:19:08.738 "num_blocks": 38912, 00:19:08.738 "product_name": "Logical Volume", 00:19:08.738 "supported_io_types": { 00:19:08.738 "abort": false, 00:19:08.738 "compare": false, 00:19:08.738 "compare_and_write": false, 00:19:08.738 "flush": false, 00:19:08.738 "nvme_admin": false, 00:19:08.738 "nvme_io": false, 00:19:08.738 "read": true, 00:19:08.738 "reset": true, 00:19:08.738 "unmap": true, 00:19:08.738 "write": true, 00:19:08.738 "write_zeroes": true 00:19:08.738 }, 00:19:08.738 "uuid": "402bc945-12a2-4fe4-86dc-afbd0d51a6fc", 00:19:08.738 "zoned": false 00:19:08.738 } 00:19:08.738 ] 00:19:08.738 11:46:41 -- common/autotest_common.sh@905 -- # return 0 00:19:08.738 11:46:41 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:19:08.738 11:46:41 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:08.738 11:46:41 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:08.738 11:46:41 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:19:08.738 11:46:41 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:08.996 11:46:41 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:08.996 11:46:41 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 402bc945-12a2-4fe4-86dc-afbd0d51a6fc 00:19:09.254 11:46:42 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2b627b6-efe0-4dd8-a270-9ae26375840c 00:19:09.513 11:46:42 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:09.772 11:46:42 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:10.030 ************************************ 00:19:10.030 END TEST lvs_grow_clean 00:19:10.030 ************************************ 00:19:10.030 00:19:10.030 real 0m16.791s 00:19:10.030 user 0m15.799s 00:19:10.030 sys 0m2.235s 00:19:10.030 11:46:43 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:19:10.030 11:46:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.030 11:46:43 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:10.030 11:46:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.030 11:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.030 11:46:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.288 ************************************ 00:19:10.288 START TEST lvs_grow_dirty 00:19:10.288 ************************************ 00:19:10.288 11:46:43 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:10.288 11:46:43 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:10.546 11:46:43 -- target/nvmf_lvs_grow.sh@28 -- # lvs=0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:10.546 11:46:43 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:10.546 11:46:43 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:10.805 11:46:43 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:10.805 11:46:43 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:10.805 11:46:43 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 lvol 150 00:19:11.063 11:46:43 -- target/nvmf_lvs_grow.sh@33 -- # lvol=da73e048-b259-4461-87ee-2346d0e11473 00:19:11.063 11:46:43 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:11.063 11:46:44 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:11.321 [2024-11-20 11:46:44.188617] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:11.321 [2024-11-20 11:46:44.188721] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:11.321 true 00:19:11.321 11:46:44 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:11.321 11:46:44 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:11.579 11:46:44 -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:11.579 11:46:44 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:11.579 11:46:44 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 da73e048-b259-4461-87ee-2346d0e11473 00:19:11.838 11:46:44 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:12.097 11:46:45 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:12.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.356 11:46:45 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:12.356 11:46:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73764 00:19:12.356 11:46:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.356 11:46:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73764 /var/tmp/bdevperf.sock 00:19:12.356 11:46:45 -- common/autotest_common.sh@829 -- # '[' -z 73764 ']' 00:19:12.356 11:46:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.356 11:46:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.356 11:46:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.356 11:46:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.356 11:46:45 -- common/autotest_common.sh@10 -- # set +x 00:19:12.356 [2024-11-20 11:46:45.243532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:12.356 [2024-11-20 11:46:45.243595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73764 ] 00:19:12.356 [2024-11-20 11:46:45.382079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.616 [2024-11-20 11:46:45.474613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.180 11:46:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.180 11:46:46 -- common/autotest_common.sh@862 -- # return 0 00:19:13.180 11:46:46 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:13.439 Nvme0n1 00:19:13.439 11:46:46 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:13.697 [ 00:19:13.697 { 00:19:13.697 "aliases": [ 00:19:13.697 "da73e048-b259-4461-87ee-2346d0e11473" 00:19:13.697 ], 00:19:13.698 "assigned_rate_limits": { 00:19:13.698 "r_mbytes_per_sec": 0, 00:19:13.698 "rw_ios_per_sec": 0, 00:19:13.698 "rw_mbytes_per_sec": 0, 00:19:13.698 "w_mbytes_per_sec": 0 00:19:13.698 }, 00:19:13.698 "block_size": 4096, 00:19:13.698 "claimed": false, 00:19:13.698 "driver_specific": { 00:19:13.698 "mp_policy": "active_passive", 00:19:13.698 "nvme": [ 00:19:13.698 { 00:19:13.698 "ctrlr_data": { 00:19:13.698 "ana_reporting": false, 00:19:13.698 "cntlid": 1, 00:19:13.698 "firmware_revision": "24.01.1", 00:19:13.698 "model_number": "SPDK bdev Controller", 00:19:13.698 "multi_ctrlr": true, 00:19:13.698 "oacs": { 00:19:13.698 "firmware": 0, 00:19:13.698 "format": 0, 00:19:13.698 "ns_manage": 0, 00:19:13.698 "security": 0 00:19:13.698 }, 00:19:13.698 "serial_number": "SPDK0", 00:19:13.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:13.698 "vendor_id": "0x8086" 00:19:13.698 }, 00:19:13.698 "ns_data": { 00:19:13.698 "can_share": true, 00:19:13.698 "id": 1 00:19:13.698 }, 00:19:13.698 "trid": { 00:19:13.698 "adrfam": "IPv4", 00:19:13.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:13.698 "traddr": "10.0.0.2", 00:19:13.698 "trsvcid": "4420", 00:19:13.698 "trtype": "TCP" 00:19:13.698 }, 00:19:13.698 "vs": { 00:19:13.698 "nvme_version": "1.3" 00:19:13.698 } 00:19:13.698 } 00:19:13.698 ] 00:19:13.698 }, 00:19:13.698 "name": "Nvme0n1", 00:19:13.698 "num_blocks": 38912, 00:19:13.698 "product_name": "NVMe disk", 00:19:13.698 "supported_io_types": { 00:19:13.698 "abort": true, 00:19:13.698 "compare": true, 00:19:13.698 "compare_and_write": true, 00:19:13.698 "flush": true, 00:19:13.698 "nvme_admin": true, 00:19:13.698 "nvme_io": true, 00:19:13.698 "read": true, 00:19:13.698 "reset": true, 00:19:13.698 "unmap": true, 00:19:13.698 "write": true, 00:19:13.698 "write_zeroes": true 00:19:13.698 }, 00:19:13.698 "uuid": "da73e048-b259-4461-87ee-2346d0e11473", 00:19:13.698 "zoned": false 00:19:13.698 } 00:19:13.698 ] 00:19:13.698 11:46:46 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:13.698 11:46:46 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73806 00:19:13.698 11:46:46 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:13.698 Running I/O for 10 seconds... 
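Both passes drive I/O the same way: bdevperf is started idle (-z) on its own RPC socket, a controller is attached to the subsystem exported by the target, and perform_tests launches the 10-second randwrite run whose per-second numbers follow. A sketch with the exact flags from this run:

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests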
00:19:15.072 Latency(us) 00:19:15.072 [2024-11-20T11:46:48.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.072 [2024-11-20T11:46:48.115Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.072 Nvme0n1 : 1.00 12991.00 50.75 0.00 0.00 0.00 0.00 0.00 00:19:15.072 [2024-11-20T11:46:48.115Z] =================================================================================================================== 00:19:15.072 [2024-11-20T11:46:48.115Z] Total : 12991.00 50.75 0.00 0.00 0.00 0.00 0.00 00:19:15.072 00:19:15.636 11:46:48 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:15.894 [2024-11-20T11:46:48.937Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.895 Nvme0n1 : 2.00 12833.50 50.13 0.00 0.00 0.00 0.00 0.00 00:19:15.895 [2024-11-20T11:46:48.938Z] =================================================================================================================== 00:19:15.895 [2024-11-20T11:46:48.938Z] Total : 12833.50 50.13 0.00 0.00 0.00 0.00 0.00 00:19:15.895 00:19:15.895 true 00:19:15.895 11:46:48 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:15.895 11:46:48 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:16.153 11:46:49 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:16.153 11:46:49 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:16.153 11:46:49 -- target/nvmf_lvs_grow.sh@65 -- # wait 73806 00:19:16.720 [2024-11-20T11:46:49.763Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:16.720 Nvme0n1 : 3.00 12555.00 49.04 0.00 0.00 0.00 0.00 0.00 00:19:16.720 [2024-11-20T11:46:49.763Z] =================================================================================================================== 00:19:16.720 [2024-11-20T11:46:49.763Z] Total : 12555.00 49.04 0.00 0.00 0.00 0.00 0.00 00:19:16.720 00:19:17.658 [2024-11-20T11:46:50.701Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:17.658 Nvme0n1 : 4.00 12415.00 48.50 0.00 0.00 0.00 0.00 0.00 00:19:17.658 [2024-11-20T11:46:50.701Z] =================================================================================================================== 00:19:17.658 [2024-11-20T11:46:50.701Z] Total : 12415.00 48.50 0.00 0.00 0.00 0.00 0.00 00:19:17.658 00:19:19.039 [2024-11-20T11:46:52.082Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:19.039 Nvme0n1 : 5.00 12044.80 47.05 0.00 0.00 0.00 0.00 0.00 00:19:19.039 [2024-11-20T11:46:52.082Z] =================================================================================================================== 00:19:19.039 [2024-11-20T11:46:52.082Z] Total : 12044.80 47.05 0.00 0.00 0.00 0.00 0.00 00:19:19.039 00:19:19.981 [2024-11-20T11:46:53.024Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:19.981 Nvme0n1 : 6.00 11538.00 45.07 0.00 0.00 0.00 0.00 0.00 00:19:19.981 [2024-11-20T11:46:53.024Z] =================================================================================================================== 00:19:19.981 [2024-11-20T11:46:53.024Z] Total : 11538.00 45.07 0.00 0.00 0.00 0.00 0.00 00:19:19.981 00:19:20.920 [2024-11-20T11:46:53.963Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:19:20.920 Nvme0n1 : 7.00 10792.29 42.16 0.00 0.00 0.00 0.00 0.00 00:19:20.920 [2024-11-20T11:46:53.963Z] =================================================================================================================== 00:19:20.920 [2024-11-20T11:46:53.963Z] Total : 10792.29 42.16 0.00 0.00 0.00 0.00 0.00 00:19:20.920 00:19:21.859 [2024-11-20T11:46:54.902Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:21.859 Nvme0n1 : 8.00 10610.12 41.45 0.00 0.00 0.00 0.00 0.00 00:19:21.859 [2024-11-20T11:46:54.902Z] =================================================================================================================== 00:19:21.859 [2024-11-20T11:46:54.902Z] Total : 10610.12 41.45 0.00 0.00 0.00 0.00 0.00 00:19:21.859 00:19:22.800 [2024-11-20T11:46:55.843Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:22.800 Nvme0n1 : 9.00 10593.56 41.38 0.00 0.00 0.00 0.00 0.00 00:19:22.800 [2024-11-20T11:46:55.843Z] =================================================================================================================== 00:19:22.800 [2024-11-20T11:46:55.843Z] Total : 10593.56 41.38 0.00 0.00 0.00 0.00 0.00 00:19:22.800 00:19:23.740 [2024-11-20T11:46:56.783Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:23.740 Nvme0n1 : 10.00 10610.70 41.45 0.00 0.00 0.00 0.00 0.00 00:19:23.740 [2024-11-20T11:46:56.783Z] =================================================================================================================== 00:19:23.740 [2024-11-20T11:46:56.783Z] Total : 10610.70 41.45 0.00 0.00 0.00 0.00 0.00 00:19:23.740 00:19:23.740 00:19:23.740 Latency(us) 00:19:23.740 [2024-11-20T11:46:56.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.740 [2024-11-20T11:46:56.783Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:23.740 Nvme0n1 : 10.01 10616.61 41.47 0.00 0.00 12053.94 3219.56 582440.47 00:19:23.740 [2024-11-20T11:46:56.783Z] =================================================================================================================== 00:19:23.740 [2024-11-20T11:46:56.783Z] Total : 10616.61 41.47 0.00 0.00 12053.94 3219.56 582440.47 00:19:23.740 0 00:19:23.740 11:46:56 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73764 00:19:23.740 11:46:56 -- common/autotest_common.sh@936 -- # '[' -z 73764 ']' 00:19:23.740 11:46:56 -- common/autotest_common.sh@940 -- # kill -0 73764 00:19:23.740 11:46:56 -- common/autotest_common.sh@941 -- # uname 00:19:23.740 11:46:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.740 11:46:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73764 00:19:23.740 killing process with pid 73764 00:19:23.740 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.740 00:19:23.740 Latency(us) 00:19:23.740 [2024-11-20T11:46:56.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.740 [2024-11-20T11:46:56.783Z] =================================================================================================================== 00:19:23.740 [2024-11-20T11:46:56.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.740 11:46:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:23.740 11:46:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:23.740 11:46:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73764' 00:19:23.740 11:46:56 -- 
common/autotest_common.sh@955 -- # kill 73764 00:19:23.740 11:46:56 -- common/autotest_common.sh@960 -- # wait 73764 00:19:24.000 11:46:56 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:24.259 11:46:57 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:24.259 11:46:57 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:24.520 11:46:57 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:24.520 11:46:57 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:24.520 11:46:57 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73183 00:19:24.520 11:46:57 -- target/nvmf_lvs_grow.sh@74 -- # wait 73183 00:19:24.520 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73183 Killed "${NVMF_APP[@]}" "$@" 00:19:24.520 11:46:57 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:24.520 11:46:57 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:24.520 11:46:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:24.520 11:46:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.520 11:46:57 -- common/autotest_common.sh@10 -- # set +x 00:19:24.520 11:46:57 -- nvmf/common.sh@469 -- # nvmfpid=73962 00:19:24.520 11:46:57 -- nvmf/common.sh@470 -- # waitforlisten 73962 00:19:24.520 11:46:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:24.520 11:46:57 -- common/autotest_common.sh@829 -- # '[' -z 73962 ']' 00:19:24.520 11:46:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.520 11:46:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.520 11:46:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.520 11:46:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.520 11:46:57 -- common/autotest_common.sh@10 -- # set +x 00:19:24.520 [2024-11-20 11:46:57.515568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:24.520 [2024-11-20 11:46:57.515645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.780 [2024-11-20 11:46:57.657190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.780 [2024-11-20 11:46:57.733008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:24.780 [2024-11-20 11:46:57.733120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.780 [2024-11-20 11:46:57.733126] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.780 [2024-11-20 11:46:57.733131] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
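At this point lvs_grow_dirty has deliberately left the lvstore dirty: the previous target (pid 73183) was removed with kill -9 while the grown lvstore was still open, and nvmfappstart has just brought up a replacement target (pid 73962) that performs blobstore recovery once the AIO file is re-attached. A condensed sketch of the RPC flow that follows, restricted to calls that actually appear in this trace (rpc.py stands for the scripts/rpc.py path shown above; UUIDs and names are the ones reported in the log):

  # re-attach the backing file; examine kicks off blobstore recovery
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine
  # the lvol bdev reappears once recovery has replayed the metadata
  rpc.py bdev_get_bdevs -b da73e048-b259-4461-87ee-2346d0e11473 -t 2000
  # cluster counts must still reflect the grow done before the kill
  rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 | jq -r '.[0].free_clusters'        # expected 61
  rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 | jq -r '.[0].total_data_clusters'  # expected 99
  # deleting the AIO bdev closes the lvstore, so the same query is then expected to fail
  rpc.py bdev_aio_delete aio_bdev
  rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89   # -> Code=-19, No such device

The trace below interleaves these steps with the new target's own startup and recovery notices.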
00:19:24.780 [2024-11-20 11:46:57.733151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.350 11:46:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.350 11:46:58 -- common/autotest_common.sh@862 -- # return 0 00:19:25.350 11:46:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:25.350 11:46:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.350 11:46:58 -- common/autotest_common.sh@10 -- # set +x 00:19:25.610 11:46:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.610 11:46:58 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:25.610 [2024-11-20 11:46:58.618571] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:25.610 [2024-11-20 11:46:58.619570] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:25.610 [2024-11-20 11:46:58.619740] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:25.869 11:46:58 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:25.869 11:46:58 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev da73e048-b259-4461-87ee-2346d0e11473 00:19:25.869 11:46:58 -- common/autotest_common.sh@897 -- # local bdev_name=da73e048-b259-4461-87ee-2346d0e11473 00:19:25.869 11:46:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:25.869 11:46:58 -- common/autotest_common.sh@899 -- # local i 00:19:25.869 11:46:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:25.869 11:46:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:25.869 11:46:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:25.869 11:46:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da73e048-b259-4461-87ee-2346d0e11473 -t 2000 00:19:26.130 [ 00:19:26.130 { 00:19:26.130 "aliases": [ 00:19:26.130 "lvs/lvol" 00:19:26.130 ], 00:19:26.130 "assigned_rate_limits": { 00:19:26.130 "r_mbytes_per_sec": 0, 00:19:26.130 "rw_ios_per_sec": 0, 00:19:26.130 "rw_mbytes_per_sec": 0, 00:19:26.130 "w_mbytes_per_sec": 0 00:19:26.130 }, 00:19:26.130 "block_size": 4096, 00:19:26.130 "claimed": false, 00:19:26.130 "driver_specific": { 00:19:26.130 "lvol": { 00:19:26.130 "base_bdev": "aio_bdev", 00:19:26.130 "clone": false, 00:19:26.130 "esnap_clone": false, 00:19:26.130 "lvol_store_uuid": "0cb0bd3f-e852-47d8-9297-f67b758baa89", 00:19:26.130 "snapshot": false, 00:19:26.130 "thin_provision": false 00:19:26.130 } 00:19:26.130 }, 00:19:26.130 "name": "da73e048-b259-4461-87ee-2346d0e11473", 00:19:26.130 "num_blocks": 38912, 00:19:26.130 "product_name": "Logical Volume", 00:19:26.130 "supported_io_types": { 00:19:26.130 "abort": false, 00:19:26.130 "compare": false, 00:19:26.130 "compare_and_write": false, 00:19:26.130 "flush": false, 00:19:26.130 "nvme_admin": false, 00:19:26.130 "nvme_io": false, 00:19:26.130 "read": true, 00:19:26.130 "reset": true, 00:19:26.130 "unmap": true, 00:19:26.130 "write": true, 00:19:26.130 "write_zeroes": true 00:19:26.130 }, 00:19:26.130 "uuid": "da73e048-b259-4461-87ee-2346d0e11473", 00:19:26.130 "zoned": false 00:19:26.130 } 00:19:26.130 ] 00:19:26.130 11:46:59 -- common/autotest_common.sh@905 -- # return 0 00:19:26.130 11:46:59 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:26.130 11:46:59 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:26.389 11:46:59 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:26.389 11:46:59 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:26.389 11:46:59 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:26.648 11:46:59 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:26.648 11:46:59 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:26.907 [2024-11-20 11:46:59.702313] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:26.907 11:46:59 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:26.907 11:46:59 -- common/autotest_common.sh@650 -- # local es=0 00:19:26.907 11:46:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:26.907 11:46:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.907 11:46:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.907 11:46:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.907 11:46:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.907 11:46:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.907 11:46:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.907 11:46:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.907 11:46:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:26.907 11:46:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:26.907 2024/11/20 11:46:59 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0cb0bd3f-e852-47d8-9297-f67b758baa89], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:19:26.907 request: 00:19:26.907 { 00:19:26.907 "method": "bdev_lvol_get_lvstores", 00:19:26.907 "params": { 00:19:26.907 "uuid": "0cb0bd3f-e852-47d8-9297-f67b758baa89" 00:19:26.907 } 00:19:26.907 } 00:19:26.907 Got JSON-RPC error response 00:19:26.907 GoRPCClient: error on JSON-RPC call 00:19:27.166 11:46:59 -- common/autotest_common.sh@653 -- # es=1 00:19:27.166 11:46:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.166 11:46:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.166 11:46:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.166 11:46:59 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:27.166 aio_bdev 00:19:27.166 11:47:00 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev da73e048-b259-4461-87ee-2346d0e11473 00:19:27.166 11:47:00 -- common/autotest_common.sh@897 -- # local bdev_name=da73e048-b259-4461-87ee-2346d0e11473 00:19:27.167 11:47:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:27.167 
11:47:00 -- common/autotest_common.sh@899 -- # local i 00:19:27.167 11:47:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:27.167 11:47:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:27.167 11:47:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:27.426 11:47:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da73e048-b259-4461-87ee-2346d0e11473 -t 2000 00:19:27.685 [ 00:19:27.685 { 00:19:27.685 "aliases": [ 00:19:27.685 "lvs/lvol" 00:19:27.685 ], 00:19:27.685 "assigned_rate_limits": { 00:19:27.685 "r_mbytes_per_sec": 0, 00:19:27.685 "rw_ios_per_sec": 0, 00:19:27.685 "rw_mbytes_per_sec": 0, 00:19:27.685 "w_mbytes_per_sec": 0 00:19:27.685 }, 00:19:27.685 "block_size": 4096, 00:19:27.685 "claimed": false, 00:19:27.685 "driver_specific": { 00:19:27.685 "lvol": { 00:19:27.685 "base_bdev": "aio_bdev", 00:19:27.685 "clone": false, 00:19:27.685 "esnap_clone": false, 00:19:27.685 "lvol_store_uuid": "0cb0bd3f-e852-47d8-9297-f67b758baa89", 00:19:27.685 "snapshot": false, 00:19:27.685 "thin_provision": false 00:19:27.685 } 00:19:27.685 }, 00:19:27.685 "name": "da73e048-b259-4461-87ee-2346d0e11473", 00:19:27.685 "num_blocks": 38912, 00:19:27.685 "product_name": "Logical Volume", 00:19:27.685 "supported_io_types": { 00:19:27.685 "abort": false, 00:19:27.685 "compare": false, 00:19:27.685 "compare_and_write": false, 00:19:27.685 "flush": false, 00:19:27.685 "nvme_admin": false, 00:19:27.685 "nvme_io": false, 00:19:27.685 "read": true, 00:19:27.685 "reset": true, 00:19:27.685 "unmap": true, 00:19:27.685 "write": true, 00:19:27.685 "write_zeroes": true 00:19:27.685 }, 00:19:27.685 "uuid": "da73e048-b259-4461-87ee-2346d0e11473", 00:19:27.685 "zoned": false 00:19:27.685 } 00:19:27.685 ] 00:19:27.685 11:47:00 -- common/autotest_common.sh@905 -- # return 0 00:19:27.685 11:47:00 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:27.685 11:47:00 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:27.945 11:47:00 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:27.945 11:47:00 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:27.945 11:47:00 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:27.945 11:47:00 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:27.945 11:47:00 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete da73e048-b259-4461-87ee-2346d0e11473 00:19:28.203 11:47:01 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cb0bd3f-e852-47d8-9297-f67b758baa89 00:19:28.462 11:47:01 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:28.721 11:47:01 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:28.980 00:19:28.980 real 0m18.912s 00:19:28.980 user 0m37.673s 00:19:28.980 sys 0m6.996s 00:19:28.980 11:47:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:28.980 11:47:01 -- common/autotest_common.sh@10 -- # set +x 00:19:28.980 ************************************ 00:19:28.980 END TEST lvs_grow_dirty 00:19:28.980 ************************************ 00:19:29.239 11:47:02 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:29.239 11:47:02 -- common/autotest_common.sh@806 -- # type=--id 00:19:29.239 11:47:02 -- common/autotest_common.sh@807 -- # id=0 00:19:29.239 11:47:02 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:29.239 11:47:02 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:29.239 11:47:02 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:29.239 11:47:02 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:29.239 11:47:02 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:29.239 11:47:02 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:29.239 nvmf_trace.0 00:19:29.239 11:47:02 -- common/autotest_common.sh@821 -- # return 0 00:19:29.239 11:47:02 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:29.239 11:47:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:29.239 11:47:02 -- nvmf/common.sh@116 -- # sync 00:19:29.830 11:47:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:29.830 11:47:02 -- nvmf/common.sh@119 -- # set +e 00:19:29.830 11:47:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:29.830 11:47:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:29.830 rmmod nvme_tcp 00:19:29.830 rmmod nvme_fabrics 00:19:29.830 rmmod nvme_keyring 00:19:29.830 11:47:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:29.830 11:47:02 -- nvmf/common.sh@123 -- # set -e 00:19:29.830 11:47:02 -- nvmf/common.sh@124 -- # return 0 00:19:29.830 11:47:02 -- nvmf/common.sh@477 -- # '[' -n 73962 ']' 00:19:29.830 11:47:02 -- nvmf/common.sh@478 -- # killprocess 73962 00:19:29.830 11:47:02 -- common/autotest_common.sh@936 -- # '[' -z 73962 ']' 00:19:29.830 11:47:02 -- common/autotest_common.sh@940 -- # kill -0 73962 00:19:29.830 11:47:02 -- common/autotest_common.sh@941 -- # uname 00:19:29.830 11:47:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:29.830 11:47:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73962 00:19:29.830 11:47:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:29.830 11:47:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:29.830 killing process with pid 73962 00:19:29.830 11:47:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73962' 00:19:29.830 11:47:02 -- common/autotest_common.sh@955 -- # kill 73962 00:19:29.830 11:47:02 -- common/autotest_common.sh@960 -- # wait 73962 00:19:29.830 11:47:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:29.830 11:47:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:29.830 11:47:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:29.830 11:47:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.830 11:47:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:29.830 11:47:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.830 11:47:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.830 11:47:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.091 11:47:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:30.091 00:19:30.091 real 0m38.460s 00:19:30.091 user 0m59.380s 00:19:30.091 sys 0m10.280s 00:19:30.091 11:47:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:30.091 11:47:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.091 
************************************ 00:19:30.091 END TEST nvmf_lvs_grow 00:19:30.091 ************************************ 00:19:30.091 11:47:02 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:30.091 11:47:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.091 11:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.091 11:47:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.091 ************************************ 00:19:30.091 START TEST nvmf_bdev_io_wait 00:19:30.091 ************************************ 00:19:30.091 11:47:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:30.091 * Looking for test storage... 00:19:30.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.091 11:47:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:30.091 11:47:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:30.091 11:47:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:30.351 11:47:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:30.351 11:47:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:30.351 11:47:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:30.351 11:47:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:30.351 11:47:03 -- scripts/common.sh@335 -- # IFS=.-: 00:19:30.351 11:47:03 -- scripts/common.sh@335 -- # read -ra ver1 00:19:30.351 11:47:03 -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.351 11:47:03 -- scripts/common.sh@336 -- # read -ra ver2 00:19:30.352 11:47:03 -- scripts/common.sh@337 -- # local 'op=<' 00:19:30.352 11:47:03 -- scripts/common.sh@339 -- # ver1_l=2 00:19:30.352 11:47:03 -- scripts/common.sh@340 -- # ver2_l=1 00:19:30.352 11:47:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:30.352 11:47:03 -- scripts/common.sh@343 -- # case "$op" in 00:19:30.352 11:47:03 -- scripts/common.sh@344 -- # : 1 00:19:30.352 11:47:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:30.352 11:47:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.352 11:47:03 -- scripts/common.sh@364 -- # decimal 1 00:19:30.352 11:47:03 -- scripts/common.sh@352 -- # local d=1 00:19:30.352 11:47:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.352 11:47:03 -- scripts/common.sh@354 -- # echo 1 00:19:30.352 11:47:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:30.352 11:47:03 -- scripts/common.sh@365 -- # decimal 2 00:19:30.352 11:47:03 -- scripts/common.sh@352 -- # local d=2 00:19:30.352 11:47:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.352 11:47:03 -- scripts/common.sh@354 -- # echo 2 00:19:30.352 11:47:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:30.352 11:47:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:30.352 11:47:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:30.352 11:47:03 -- scripts/common.sh@367 -- # return 0 00:19:30.352 11:47:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.352 11:47:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.352 --rc genhtml_branch_coverage=1 00:19:30.352 --rc genhtml_function_coverage=1 00:19:30.352 --rc genhtml_legend=1 00:19:30.352 --rc geninfo_all_blocks=1 00:19:30.352 --rc geninfo_unexecuted_blocks=1 00:19:30.352 00:19:30.352 ' 00:19:30.352 11:47:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.352 --rc genhtml_branch_coverage=1 00:19:30.352 --rc genhtml_function_coverage=1 00:19:30.352 --rc genhtml_legend=1 00:19:30.352 --rc geninfo_all_blocks=1 00:19:30.352 --rc geninfo_unexecuted_blocks=1 00:19:30.352 00:19:30.352 ' 00:19:30.352 11:47:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.352 --rc genhtml_branch_coverage=1 00:19:30.352 --rc genhtml_function_coverage=1 00:19:30.352 --rc genhtml_legend=1 00:19:30.352 --rc geninfo_all_blocks=1 00:19:30.352 --rc geninfo_unexecuted_blocks=1 00:19:30.352 00:19:30.352 ' 00:19:30.352 11:47:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.352 --rc genhtml_branch_coverage=1 00:19:30.352 --rc genhtml_function_coverage=1 00:19:30.352 --rc genhtml_legend=1 00:19:30.352 --rc geninfo_all_blocks=1 00:19:30.352 --rc geninfo_unexecuted_blocks=1 00:19:30.352 00:19:30.352 ' 00:19:30.352 11:47:03 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.352 11:47:03 -- nvmf/common.sh@7 -- # uname -s 00:19:30.352 11:47:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.352 11:47:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.352 11:47:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.352 11:47:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.352 11:47:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.352 11:47:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.352 11:47:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.352 11:47:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.352 11:47:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.352 11:47:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.352 11:47:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
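Before any target is started, nvmf/common.sh fixes the initiator identity that later connect configurations can reuse: the host NQN comes from nvme gen-hostnqn and the host ID is the UUID embedded in it (f0f74192-2f63-41a2-a029-58386886737a in this run). A minimal sketch of that derivation; the suffix-stripping shown here is an illustration, not necessarily the exact expansion common.sh uses:

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # the bare <uuid>
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Keeping a single generated NQN/ID pair for the whole run means any nvme connect issued through NVME_HOST presents a consistent host identity to the target.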
00:19:30.352 11:47:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:19:30.352 11:47:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.352 11:47:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.352 11:47:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.352 11:47:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.352 11:47:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.352 11:47:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.352 11:47:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.352 11:47:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.352 11:47:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.352 11:47:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.352 11:47:03 -- paths/export.sh@5 -- # export PATH 00:19:30.352 11:47:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.352 11:47:03 -- nvmf/common.sh@46 -- # : 0 00:19:30.352 11:47:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.352 11:47:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.352 11:47:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.352 11:47:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.352 11:47:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.352 11:47:03 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:30.352 11:47:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.352 11:47:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.352 11:47:03 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:30.352 11:47:03 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:30.352 11:47:03 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:30.352 11:47:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.352 11:47:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.352 11:47:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.352 11:47:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.352 11:47:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.352 11:47:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.352 11:47:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.352 11:47:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.352 11:47:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:30.352 11:47:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:30.352 11:47:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:30.353 11:47:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:30.353 11:47:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:30.353 11:47:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:30.353 11:47:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.353 11:47:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.353 11:47:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.353 11:47:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:30.353 11:47:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.353 11:47:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.353 11:47:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.353 11:47:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.353 11:47:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.353 11:47:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.353 11:47:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.353 11:47:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.353 11:47:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:30.353 11:47:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:30.353 Cannot find device "nvmf_tgt_br" 00:19:30.353 11:47:03 -- nvmf/common.sh@154 -- # true 00:19:30.353 11:47:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.353 Cannot find device "nvmf_tgt_br2" 00:19:30.353 11:47:03 -- nvmf/common.sh@155 -- # true 00:19:30.353 11:47:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:30.353 11:47:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:30.353 Cannot find device "nvmf_tgt_br" 00:19:30.353 11:47:03 -- nvmf/common.sh@157 -- # true 00:19:30.353 11:47:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:30.353 Cannot find device "nvmf_tgt_br2" 00:19:30.353 11:47:03 -- nvmf/common.sh@158 -- # true 00:19:30.353 11:47:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:30.613 11:47:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:30.613 11:47:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.613 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.613 11:47:03 -- nvmf/common.sh@161 -- # true 00:19:30.613 11:47:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.613 11:47:03 -- nvmf/common.sh@162 -- # true 00:19:30.613 11:47:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.613 11:47:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.613 11:47:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.613 11:47:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.613 11:47:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.613 11:47:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.613 11:47:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.613 11:47:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.613 11:47:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.613 11:47:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:30.613 11:47:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:30.613 11:47:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:30.613 11:47:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:30.613 11:47:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.613 11:47:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.613 11:47:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.613 11:47:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:30.613 11:47:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:30.613 11:47:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.613 11:47:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.613 11:47:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.613 11:47:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.613 11:47:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.613 11:47:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:30.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:19:30.613 00:19:30.613 --- 10.0.0.2 ping statistics --- 00:19:30.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.613 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:19:30.613 11:47:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:30.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.147 ms 00:19:30.613 00:19:30.613 --- 10.0.0.3 ping statistics --- 00:19:30.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.613 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:19:30.613 11:47:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:19:30.613 00:19:30.613 --- 10.0.0.1 ping statistics --- 00:19:30.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.613 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:30.613 11:47:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.613 11:47:03 -- nvmf/common.sh@421 -- # return 0 00:19:30.613 11:47:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:30.613 11:47:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.613 11:47:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:30.613 11:47:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:30.613 11:47:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.613 11:47:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:30.613 11:47:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:30.873 11:47:03 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:30.873 11:47:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:30.873 11:47:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.873 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:30.873 11:47:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:30.873 11:47:03 -- nvmf/common.sh@469 -- # nvmfpid=74378 00:19:30.873 11:47:03 -- nvmf/common.sh@470 -- # waitforlisten 74378 00:19:30.873 11:47:03 -- common/autotest_common.sh@829 -- # '[' -z 74378 ']' 00:19:30.873 11:47:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.873 11:47:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.873 11:47:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.873 11:47:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.873 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:30.873 [2024-11-20 11:47:03.737812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:30.873 [2024-11-20 11:47:03.737882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.873 [2024-11-20 11:47:03.879468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.134 [2024-11-20 11:47:03.969823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:31.134 [2024-11-20 11:47:03.969956] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.134 [2024-11-20 11:47:03.969963] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.134 [2024-11-20 11:47:03.969968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
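The nvmf_veth_init block above is what gives the TCP tests their network: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1), the target sits in the nvmf_tgt_ns_spdk namespace behind nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), and the peer ends of the veth pairs are joined by the nvmf_br bridge; the three pings are the go/no-go check before the target is launched. A stripped-down sketch of the same layout, limited to the iproute2/iptables calls visible in the trace (the second target interface is omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target namespace

The two iptables rules simply keep host firewall policy from filtering the bridged NVMe/TCP traffic on port 4420.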
00:19:31.134 [2024-11-20 11:47:03.970171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.134 [2024-11-20 11:47:03.970444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.134 [2024-11-20 11:47:03.970509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.134 [2024-11-20 11:47:03.970515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.704 11:47:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.705 11:47:04 -- common/autotest_common.sh@862 -- # return 0 00:19:31.705 11:47:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:31.705 11:47:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.705 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.705 11:47:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.705 11:47:04 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:31.705 11:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.705 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.705 11:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.705 11:47:04 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:31.705 11:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.705 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.705 11:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.705 11:47:04 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.705 11:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.705 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.965 [2024-11-20 11:47:04.746984] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.965 11:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.965 11:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.965 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.965 Malloc0 00:19:31.965 11:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.965 11:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.965 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.965 11:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.965 11:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.965 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.965 11:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.965 11:47:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.965 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.965 [2024-11-20 11:47:04.809498] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.965 11:47:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74431 00:19:31.965 11:47:04 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # config=() 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.965 11:47:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@30 -- # READ_PID=74433 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.965 { 00:19:31.965 "params": { 00:19:31.965 "name": "Nvme$subsystem", 00:19:31.965 "trtype": "$TEST_TRANSPORT", 00:19:31.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.965 "adrfam": "ipv4", 00:19:31.965 "trsvcid": "$NVMF_PORT", 00:19:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.965 "hdgst": ${hdgst:-false}, 00:19:31.965 "ddgst": ${ddgst:-false} 00:19:31.965 }, 00:19:31.965 "method": "bdev_nvme_attach_controller" 00:19:31.965 } 00:19:31.965 EOF 00:19:31.965 )") 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # config=() 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.965 11:47:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.965 { 00:19:31.965 "params": { 00:19:31.965 "name": "Nvme$subsystem", 00:19:31.965 "trtype": "$TEST_TRANSPORT", 00:19:31.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.965 "adrfam": "ipv4", 00:19:31.965 "trsvcid": "$NVMF_PORT", 00:19:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.965 "hdgst": ${hdgst:-false}, 00:19:31.965 "ddgst": ${ddgst:-false} 00:19:31.965 }, 00:19:31.965 "method": "bdev_nvme_attach_controller" 00:19:31.965 } 00:19:31.965 EOF 00:19:31.965 )") 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # cat 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74436 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # config=() 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # cat 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.965 11:47:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.965 { 00:19:31.965 "params": { 00:19:31.965 "name": "Nvme$subsystem", 00:19:31.965 "trtype": "$TEST_TRANSPORT", 00:19:31.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.965 "adrfam": "ipv4", 00:19:31.965 "trsvcid": "$NVMF_PORT", 00:19:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.965 "hdgst": ${hdgst:-false}, 00:19:31.965 "ddgst": ${ddgst:-false} 00:19:31.965 }, 00:19:31.965 "method": "bdev_nvme_attach_controller" 00:19:31.965 } 00:19:31.965 EOF 00:19:31.965 )") 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=74440 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@35 -- # sync 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # cat 00:19:31.965 11:47:04 -- nvmf/common.sh@544 -- # jq . 00:19:31.965 11:47:04 -- nvmf/common.sh@544 -- # jq . 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:31.965 11:47:04 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # config=() 00:19:31.965 11:47:04 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.965 11:47:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.965 { 00:19:31.965 "params": { 00:19:31.965 "name": "Nvme$subsystem", 00:19:31.965 "trtype": "$TEST_TRANSPORT", 00:19:31.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.965 "adrfam": "ipv4", 00:19:31.965 "trsvcid": "$NVMF_PORT", 00:19:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.965 "hdgst": ${hdgst:-false}, 00:19:31.965 "ddgst": ${ddgst:-false} 00:19:31.965 }, 00:19:31.965 "method": "bdev_nvme_attach_controller" 00:19:31.965 } 00:19:31.965 EOF 00:19:31.965 )") 00:19:31.965 11:47:04 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.965 11:47:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.965 "params": { 00:19:31.965 "name": "Nvme1", 00:19:31.965 "trtype": "tcp", 00:19:31.965 "traddr": "10.0.0.2", 00:19:31.965 "adrfam": "ipv4", 00:19:31.965 "trsvcid": "4420", 00:19:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.965 "hdgst": false, 00:19:31.965 "ddgst": false 00:19:31.965 }, 00:19:31.965 "method": "bdev_nvme_attach_controller" 00:19:31.965 }' 00:19:31.965 11:47:04 -- nvmf/common.sh@542 -- # cat 00:19:31.965 11:47:04 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.966 11:47:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.966 "params": { 00:19:31.966 "name": "Nvme1", 00:19:31.966 "trtype": "tcp", 00:19:31.966 "traddr": "10.0.0.2", 00:19:31.966 "adrfam": "ipv4", 00:19:31.966 "trsvcid": "4420", 00:19:31.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.966 "hdgst": false, 00:19:31.966 "ddgst": false 00:19:31.966 }, 00:19:31.966 "method": "bdev_nvme_attach_controller" 00:19:31.966 }' 00:19:31.966 11:47:04 -- nvmf/common.sh@544 -- # jq . 00:19:31.966 11:47:04 -- nvmf/common.sh@544 -- # jq . 
00:19:31.966 11:47:04 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.966 11:47:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.966 "params": { 00:19:31.966 "name": "Nvme1", 00:19:31.966 "trtype": "tcp", 00:19:31.966 "traddr": "10.0.0.2", 00:19:31.966 "adrfam": "ipv4", 00:19:31.966 "trsvcid": "4420", 00:19:31.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.966 "hdgst": false, 00:19:31.966 "ddgst": false 00:19:31.966 }, 00:19:31.966 "method": "bdev_nvme_attach_controller" 00:19:31.966 }' 00:19:31.966 11:47:04 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.966 11:47:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.966 "params": { 00:19:31.966 "name": "Nvme1", 00:19:31.966 "trtype": "tcp", 00:19:31.966 "traddr": "10.0.0.2", 00:19:31.966 "adrfam": "ipv4", 00:19:31.966 "trsvcid": "4420", 00:19:31.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.966 "hdgst": false, 00:19:31.966 "ddgst": false 00:19:31.966 }, 00:19:31.966 "method": "bdev_nvme_attach_controller" 00:19:31.966 }' 00:19:31.966 [2024-11-20 11:47:04.881646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:31.966 [2024-11-20 11:47:04.881723] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:31.966 [2024-11-20 11:47:04.883797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:31.966 [2024-11-20 11:47:04.883847] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:31.966 [2024-11-20 11:47:04.886030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:31.966 [2024-11-20 11:47:04.886285] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:31.966 11:47:04 -- target/bdev_io_wait.sh@37 -- # wait 74431 00:19:31.966 [2024-11-20 11:47:04.893873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:31.966 [2024-11-20 11:47:04.893965] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:32.226 [2024-11-20 11:47:05.108134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.226 [2024-11-20 11:47:05.215372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:32.226 [2024-11-20 11:47:05.238222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.486 [2024-11-20 11:47:05.333697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.486 [2024-11-20 11:47:05.347954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:32.486 [2024-11-20 11:47:05.411392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.486 Running I/O for 1 seconds... 
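The four bdevperf instances launched above (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80, each with -q 128 -o 4096 -t 1 -s 256) do not read a config file: gen_nvmf_target_json prints a JSON bdev configuration and the shell hands it to bdevperf on an anonymous descriptor, which is why each command line shows --json /dev/fd/63. A reduced sketch of that pattern for the write job; the per-controller entry is exactly what the trace prints for Nvme1, while the process-substitution plumbing and any outer wrapper gen_nvmf_target_json adds around it are not visible in an xtrace log and are assumed here:

  nvme1='{
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }'
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(printf '%s\n' "$nvme1")

With -t 1 each job runs for roughly one second, so the per-workload latency tables that follow summarize about a second of traffic against the Malloc0 namespace exported on 10.0.0.2:4420.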
00:19:32.486 [2024-11-20 11:47:05.439299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:32.486 Running I/O for 1 seconds... 00:19:32.486 [2024-11-20 11:47:05.522185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:32.746 Running I/O for 1 seconds... 00:19:32.746 Running I/O for 1 seconds... 00:19:33.681 00:19:33.681 Latency(us) 00:19:33.681 [2024-11-20T11:47:06.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.681 [2024-11-20T11:47:06.724Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:33.681 Nvme1n1 : 1.02 8357.72 32.65 0.00 0.00 15121.97 6410.51 30449.91 00:19:33.681 [2024-11-20T11:47:06.724Z] =================================================================================================================== 00:19:33.681 [2024-11-20T11:47:06.724Z] Total : 8357.72 32.65 0.00 0.00 15121.97 6410.51 30449.91 00:19:33.681 00:19:33.681 Latency(us) 00:19:33.681 [2024-11-20T11:47:06.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.681 [2024-11-20T11:47:06.724Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:33.681 Nvme1n1 : 1.01 9214.19 35.99 0.00 0.00 13823.02 9501.29 23924.93 00:19:33.681 [2024-11-20T11:47:06.724Z] =================================================================================================================== 00:19:33.681 [2024-11-20T11:47:06.724Z] Total : 9214.19 35.99 0.00 0.00 13823.02 9501.29 23924.93 00:19:33.681 00:19:33.681 Latency(us) 00:19:33.681 [2024-11-20T11:47:06.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.681 [2024-11-20T11:47:06.724Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:33.681 Nvme1n1 : 1.00 249154.71 973.26 0.00 0.00 511.80 199.43 2332.39 00:19:33.681 [2024-11-20T11:47:06.724Z] =================================================================================================================== 00:19:33.681 [2024-11-20T11:47:06.724Z] Total : 249154.71 973.26 0.00 0.00 511.80 199.43 2332.39 00:19:33.681 00:19:33.681 Latency(us) 00:19:33.681 [2024-11-20T11:47:06.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.681 [2024-11-20T11:47:06.724Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:33.681 Nvme1n1 : 1.00 8700.02 33.98 0.00 0.00 14675.83 4435.84 40294.62 00:19:33.681 [2024-11-20T11:47:06.724Z] =================================================================================================================== 00:19:33.681 [2024-11-20T11:47:06.724Z] Total : 8700.02 33.98 0.00 0.00 14675.83 4435.84 40294.62 00:19:34.250 11:47:07 -- target/bdev_io_wait.sh@38 -- # wait 74433 00:19:34.250 11:47:07 -- target/bdev_io_wait.sh@39 -- # wait 74436 00:19:34.250 11:47:07 -- target/bdev_io_wait.sh@40 -- # wait 74440 00:19:34.250 11:47:07 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.250 11:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.250 11:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:34.250 11:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.250 11:47:07 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:34.250 11:47:07 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:34.250 11:47:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:34.250 11:47:07 -- nvmf/common.sh@116 -- # sync 00:19:34.250 11:47:07 -- nvmf/common.sh@118 
-- # '[' tcp == tcp ']' 00:19:34.250 11:47:07 -- nvmf/common.sh@119 -- # set +e 00:19:34.250 11:47:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:34.250 11:47:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:34.250 rmmod nvme_tcp 00:19:34.250 rmmod nvme_fabrics 00:19:34.250 rmmod nvme_keyring 00:19:34.250 11:47:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:34.250 11:47:07 -- nvmf/common.sh@123 -- # set -e 00:19:34.250 11:47:07 -- nvmf/common.sh@124 -- # return 0 00:19:34.250 11:47:07 -- nvmf/common.sh@477 -- # '[' -n 74378 ']' 00:19:34.250 11:47:07 -- nvmf/common.sh@478 -- # killprocess 74378 00:19:34.250 11:47:07 -- common/autotest_common.sh@936 -- # '[' -z 74378 ']' 00:19:34.250 11:47:07 -- common/autotest_common.sh@940 -- # kill -0 74378 00:19:34.250 11:47:07 -- common/autotest_common.sh@941 -- # uname 00:19:34.250 11:47:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:34.250 11:47:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74378 00:19:34.250 11:47:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:34.250 11:47:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:34.250 killing process with pid 74378 00:19:34.250 11:47:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74378' 00:19:34.250 11:47:07 -- common/autotest_common.sh@955 -- # kill 74378 00:19:34.250 11:47:07 -- common/autotest_common.sh@960 -- # wait 74378 00:19:34.511 11:47:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:34.511 11:47:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:34.511 11:47:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:34.511 11:47:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.511 11:47:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:34.511 11:47:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.511 11:47:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.511 11:47:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.511 11:47:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:34.770 00:19:34.771 real 0m4.580s 00:19:34.771 user 0m19.694s 00:19:34.771 sys 0m2.188s 00:19:34.771 11:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:34.771 11:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:34.771 ************************************ 00:19:34.771 END TEST nvmf_bdev_io_wait 00:19:34.771 ************************************ 00:19:34.771 11:47:07 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:34.771 11:47:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:34.771 11:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:34.771 11:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:34.771 ************************************ 00:19:34.771 START TEST nvmf_queue_depth 00:19:34.771 ************************************ 00:19:34.771 11:47:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:34.771 * Looking for test storage... 
00:19:34.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:34.771 11:47:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:34.771 11:47:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:34.771 11:47:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:35.032 11:47:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:35.032 11:47:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:35.032 11:47:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:35.032 11:47:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:35.032 11:47:07 -- scripts/common.sh@335 -- # IFS=.-: 00:19:35.032 11:47:07 -- scripts/common.sh@335 -- # read -ra ver1 00:19:35.032 11:47:07 -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.032 11:47:07 -- scripts/common.sh@336 -- # read -ra ver2 00:19:35.032 11:47:07 -- scripts/common.sh@337 -- # local 'op=<' 00:19:35.032 11:47:07 -- scripts/common.sh@339 -- # ver1_l=2 00:19:35.032 11:47:07 -- scripts/common.sh@340 -- # ver2_l=1 00:19:35.032 11:47:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:35.032 11:47:07 -- scripts/common.sh@343 -- # case "$op" in 00:19:35.032 11:47:07 -- scripts/common.sh@344 -- # : 1 00:19:35.032 11:47:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:35.032 11:47:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.032 11:47:07 -- scripts/common.sh@364 -- # decimal 1 00:19:35.032 11:47:07 -- scripts/common.sh@352 -- # local d=1 00:19:35.032 11:47:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.032 11:47:07 -- scripts/common.sh@354 -- # echo 1 00:19:35.032 11:47:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:35.032 11:47:07 -- scripts/common.sh@365 -- # decimal 2 00:19:35.032 11:47:07 -- scripts/common.sh@352 -- # local d=2 00:19:35.032 11:47:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.032 11:47:07 -- scripts/common.sh@354 -- # echo 2 00:19:35.032 11:47:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:35.032 11:47:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:35.032 11:47:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:35.032 11:47:07 -- scripts/common.sh@367 -- # return 0 00:19:35.032 11:47:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.032 11:47:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.032 --rc genhtml_branch_coverage=1 00:19:35.032 --rc genhtml_function_coverage=1 00:19:35.032 --rc genhtml_legend=1 00:19:35.032 --rc geninfo_all_blocks=1 00:19:35.032 --rc geninfo_unexecuted_blocks=1 00:19:35.032 00:19:35.032 ' 00:19:35.032 11:47:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.032 --rc genhtml_branch_coverage=1 00:19:35.032 --rc genhtml_function_coverage=1 00:19:35.032 --rc genhtml_legend=1 00:19:35.032 --rc geninfo_all_blocks=1 00:19:35.032 --rc geninfo_unexecuted_blocks=1 00:19:35.032 00:19:35.032 ' 00:19:35.032 11:47:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.032 --rc genhtml_branch_coverage=1 00:19:35.032 --rc genhtml_function_coverage=1 00:19:35.032 --rc genhtml_legend=1 00:19:35.032 --rc geninfo_all_blocks=1 00:19:35.032 --rc geninfo_unexecuted_blocks=1 00:19:35.032 00:19:35.032 ' 00:19:35.032 
11:47:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:35.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.032 --rc genhtml_branch_coverage=1 00:19:35.032 --rc genhtml_function_coverage=1 00:19:35.032 --rc genhtml_legend=1 00:19:35.032 --rc geninfo_all_blocks=1 00:19:35.032 --rc geninfo_unexecuted_blocks=1 00:19:35.032 00:19:35.032 ' 00:19:35.032 11:47:07 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.032 11:47:07 -- nvmf/common.sh@7 -- # uname -s 00:19:35.032 11:47:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.032 11:47:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.032 11:47:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.032 11:47:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.032 11:47:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.032 11:47:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.032 11:47:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.032 11:47:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.032 11:47:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.032 11:47:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.033 11:47:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:19:35.033 11:47:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:19:35.033 11:47:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.033 11:47:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.033 11:47:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.033 11:47:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.033 11:47:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.033 11:47:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.033 11:47:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.033 11:47:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.033 11:47:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.033 11:47:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.033 11:47:07 -- paths/export.sh@5 -- # export PATH 00:19:35.033 11:47:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.033 11:47:07 -- nvmf/common.sh@46 -- # : 0 00:19:35.033 11:47:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:35.033 11:47:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:35.033 11:47:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:35.033 11:47:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.033 11:47:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.033 11:47:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:35.033 11:47:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:35.033 11:47:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:35.033 11:47:07 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:35.033 11:47:07 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:35.033 11:47:07 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.033 11:47:07 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:35.033 11:47:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:35.033 11:47:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.033 11:47:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.033 11:47:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.033 11:47:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.033 11:47:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.033 11:47:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.033 11:47:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.033 11:47:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:35.033 11:47:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:35.033 11:47:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:35.033 11:47:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:35.033 11:47:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:35.033 11:47:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:35.033 11:47:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.033 11:47:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.033 11:47:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:35.033 11:47:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:35.033 11:47:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:35.033 11:47:07 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:35.033 11:47:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:35.033 11:47:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.033 11:47:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:35.033 11:47:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:35.033 11:47:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:35.033 11:47:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:35.033 11:47:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:35.033 11:47:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:35.033 Cannot find device "nvmf_tgt_br" 00:19:35.033 11:47:07 -- nvmf/common.sh@154 -- # true 00:19:35.033 11:47:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.033 Cannot find device "nvmf_tgt_br2" 00:19:35.033 11:47:07 -- nvmf/common.sh@155 -- # true 00:19:35.033 11:47:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:35.033 11:47:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:35.033 Cannot find device "nvmf_tgt_br" 00:19:35.033 11:47:07 -- nvmf/common.sh@157 -- # true 00:19:35.033 11:47:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:35.033 Cannot find device "nvmf_tgt_br2" 00:19:35.033 11:47:07 -- nvmf/common.sh@158 -- # true 00:19:35.033 11:47:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:35.033 11:47:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:35.033 11:47:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.033 11:47:08 -- nvmf/common.sh@161 -- # true 00:19:35.033 11:47:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.033 11:47:08 -- nvmf/common.sh@162 -- # true 00:19:35.033 11:47:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:35.033 11:47:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:35.033 11:47:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:35.294 11:47:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:35.294 11:47:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:35.294 11:47:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:35.294 11:47:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:35.294 11:47:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:35.294 11:47:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:35.294 11:47:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:35.294 11:47:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:35.294 11:47:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:35.294 11:47:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:35.294 11:47:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:35.294 11:47:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:19:35.294 11:47:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:35.294 11:47:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:35.294 11:47:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:35.294 11:47:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:35.294 11:47:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:35.294 11:47:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:35.294 11:47:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:35.294 11:47:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:35.294 11:47:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:35.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:35.294 00:19:35.294 --- 10.0.0.2 ping statistics --- 00:19:35.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.294 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:35.294 11:47:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:35.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:35.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:35.294 00:19:35.294 --- 10.0.0.3 ping statistics --- 00:19:35.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.294 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:35.294 11:47:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:35.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:35.294 00:19:35.294 --- 10.0.0.1 ping statistics --- 00:19:35.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.294 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:35.294 11:47:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.294 11:47:08 -- nvmf/common.sh@421 -- # return 0 00:19:35.294 11:47:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:35.294 11:47:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.294 11:47:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:35.294 11:47:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:35.294 11:47:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.294 11:47:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:35.294 11:47:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:35.294 11:47:08 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:35.294 11:47:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:35.294 11:47:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.294 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:35.294 11:47:08 -- nvmf/common.sh@469 -- # nvmfpid=74678 00:19:35.294 11:47:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:35.294 11:47:08 -- nvmf/common.sh@470 -- # waitforlisten 74678 00:19:35.294 11:47:08 -- common/autotest_common.sh@829 -- # '[' -z 74678 ']' 00:19:35.294 11:47:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.294 11:47:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.294 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:35.294 11:47:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.294 11:47:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.294 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:35.294 [2024-11-20 11:47:08.267936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:35.294 [2024-11-20 11:47:08.267991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.554 [2024-11-20 11:47:08.406174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.554 [2024-11-20 11:47:08.492819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:35.554 [2024-11-20 11:47:08.492960] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.554 [2024-11-20 11:47:08.492967] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.554 [2024-11-20 11:47:08.492972] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.554 [2024-11-20 11:47:08.492995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.124 11:47:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.124 11:47:09 -- common/autotest_common.sh@862 -- # return 0 00:19:36.124 11:47:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:36.124 11:47:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.125 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:36.383 11:47:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.383 11:47:09 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.383 11:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.383 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:36.383 [2024-11-20 11:47:09.206405] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.383 11:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.383 11:47:09 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:36.383 11:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.383 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:36.383 Malloc0 00:19:36.383 11:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.383 11:47:09 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.383 11:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.383 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:36.383 11:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.383 11:47:09 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.383 11:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.383 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:36.383 11:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.383 11:47:09 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:19:36.383 11:47:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.383 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:36.383 [2024-11-20 11:47:09.265945] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.383 11:47:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.383 11:47:09 -- target/queue_depth.sh@30 -- # bdevperf_pid=74728 00:19:36.383 11:47:09 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.383 11:47:09 -- target/queue_depth.sh@33 -- # waitforlisten 74728 /var/tmp/bdevperf.sock 00:19:36.383 11:47:09 -- common/autotest_common.sh@829 -- # '[' -z 74728 ']' 00:19:36.383 11:47:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.383 11:47:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.383 11:47:09 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:36.383 11:47:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.383 11:47:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.383 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:36.383 [2024-11-20 11:47:09.323150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:36.383 [2024-11-20 11:47:09.323264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74728 ] 00:19:36.644 [2024-11-20 11:47:09.444304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.644 [2024-11-20 11:47:09.531825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.235 11:47:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.235 11:47:10 -- common/autotest_common.sh@862 -- # return 0 00:19:37.235 11:47:10 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.235 11:47:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.235 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:37.235 NVMe0n1 00:19:37.235 11:47:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.235 11:47:10 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:37.495 Running I/O for 10 seconds... 
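The queue-depth run kicked off above reduces to a short RPC sequence. A condensed sketch, reusing the addresses, sizes and socket paths that appear in the xtrace (rpc.py stands in for the test's rpc_cmd wrapper; these values are this run's defaults, not requirements, and the nvmf_tgt app is assumed to be already running inside the test's network namespace):

  # target side: TCP transport, 64 MiB malloc bdev (512 B blocks), one subsystem with a namespace and a listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf at queue depth 1024, 4 KiB verify workload, 10 seconds
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 10-second results follow below; for a passing run the Fail/s and TO/s columns should stay at zero while the single NVMe/TCP path services the full 1024 outstanding commands.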
00:19:47.477 00:19:47.477 Latency(us) 00:19:47.477 [2024-11-20T11:47:20.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.477 [2024-11-20T11:47:20.520Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:47.477 Verification LBA range: start 0x0 length 0x4000 00:19:47.477 NVMe0n1 : 10.05 18094.35 70.68 0.00 0.00 56429.97 11275.63 64562.98 00:19:47.477 [2024-11-20T11:47:20.520Z] =================================================================================================================== 00:19:47.477 [2024-11-20T11:47:20.520Z] Total : 18094.35 70.68 0.00 0.00 56429.97 11275.63 64562.98 00:19:47.477 0 00:19:47.477 11:47:20 -- target/queue_depth.sh@39 -- # killprocess 74728 00:19:47.477 11:47:20 -- common/autotest_common.sh@936 -- # '[' -z 74728 ']' 00:19:47.477 11:47:20 -- common/autotest_common.sh@940 -- # kill -0 74728 00:19:47.477 11:47:20 -- common/autotest_common.sh@941 -- # uname 00:19:47.477 11:47:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.477 11:47:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74728 00:19:47.477 11:47:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:47.477 11:47:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:47.477 killing process with pid 74728 00:19:47.477 11:47:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74728' 00:19:47.477 11:47:20 -- common/autotest_common.sh@955 -- # kill 74728 00:19:47.477 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.477 00:19:47.477 Latency(us) 00:19:47.477 [2024-11-20T11:47:20.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.477 [2024-11-20T11:47:20.520Z] =================================================================================================================== 00:19:47.477 [2024-11-20T11:47:20.520Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.477 11:47:20 -- common/autotest_common.sh@960 -- # wait 74728 00:19:47.735 11:47:20 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:47.735 11:47:20 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:47.735 11:47:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.735 11:47:20 -- nvmf/common.sh@116 -- # sync 00:19:47.735 11:47:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:47.735 11:47:20 -- nvmf/common.sh@119 -- # set +e 00:19:47.735 11:47:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.735 11:47:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:47.735 rmmod nvme_tcp 00:19:47.735 rmmod nvme_fabrics 00:19:47.735 rmmod nvme_keyring 00:19:47.994 11:47:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:47.994 11:47:20 -- nvmf/common.sh@123 -- # set -e 00:19:47.994 11:47:20 -- nvmf/common.sh@124 -- # return 0 00:19:47.994 11:47:20 -- nvmf/common.sh@477 -- # '[' -n 74678 ']' 00:19:47.994 11:47:20 -- nvmf/common.sh@478 -- # killprocess 74678 00:19:47.994 11:47:20 -- common/autotest_common.sh@936 -- # '[' -z 74678 ']' 00:19:47.994 11:47:20 -- common/autotest_common.sh@940 -- # kill -0 74678 00:19:47.994 11:47:20 -- common/autotest_common.sh@941 -- # uname 00:19:47.994 11:47:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.994 11:47:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74678 00:19:47.994 11:47:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:47.994 11:47:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:19:47.994 11:47:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74678' 00:19:47.994 killing process with pid 74678 00:19:47.994 11:47:20 -- common/autotest_common.sh@955 -- # kill 74678 00:19:47.994 11:47:20 -- common/autotest_common.sh@960 -- # wait 74678 00:19:48.253 11:47:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:48.253 11:47:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:48.253 11:47:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:48.253 11:47:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.253 11:47:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:48.253 11:47:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.254 11:47:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.254 11:47:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.254 11:47:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:48.254 00:19:48.254 real 0m13.628s 00:19:48.254 user 0m22.423s 00:19:48.254 sys 0m2.509s 00:19:48.254 11:47:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:48.254 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:48.254 ************************************ 00:19:48.254 END TEST nvmf_queue_depth 00:19:48.254 ************************************ 00:19:48.513 11:47:21 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:48.513 11:47:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:48.513 11:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:48.513 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:48.513 ************************************ 00:19:48.513 START TEST nvmf_multipath 00:19:48.513 ************************************ 00:19:48.513 11:47:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:48.513 * Looking for test storage... 00:19:48.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:48.513 11:47:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:48.513 11:47:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:48.513 11:47:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:48.513 11:47:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:48.513 11:47:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:48.513 11:47:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:48.513 11:47:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:48.513 11:47:21 -- scripts/common.sh@335 -- # IFS=.-: 00:19:48.513 11:47:21 -- scripts/common.sh@335 -- # read -ra ver1 00:19:48.513 11:47:21 -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.513 11:47:21 -- scripts/common.sh@336 -- # read -ra ver2 00:19:48.513 11:47:21 -- scripts/common.sh@337 -- # local 'op=<' 00:19:48.513 11:47:21 -- scripts/common.sh@339 -- # ver1_l=2 00:19:48.513 11:47:21 -- scripts/common.sh@340 -- # ver2_l=1 00:19:48.513 11:47:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:48.513 11:47:21 -- scripts/common.sh@343 -- # case "$op" in 00:19:48.513 11:47:21 -- scripts/common.sh@344 -- # : 1 00:19:48.513 11:47:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:48.513 11:47:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.513 11:47:21 -- scripts/common.sh@364 -- # decimal 1 00:19:48.513 11:47:21 -- scripts/common.sh@352 -- # local d=1 00:19:48.513 11:47:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.513 11:47:21 -- scripts/common.sh@354 -- # echo 1 00:19:48.513 11:47:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:48.513 11:47:21 -- scripts/common.sh@365 -- # decimal 2 00:19:48.513 11:47:21 -- scripts/common.sh@352 -- # local d=2 00:19:48.513 11:47:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.513 11:47:21 -- scripts/common.sh@354 -- # echo 2 00:19:48.513 11:47:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:48.513 11:47:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:48.513 11:47:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:48.513 11:47:21 -- scripts/common.sh@367 -- # return 0 00:19:48.513 11:47:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.513 11:47:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.513 --rc genhtml_branch_coverage=1 00:19:48.513 --rc genhtml_function_coverage=1 00:19:48.513 --rc genhtml_legend=1 00:19:48.513 --rc geninfo_all_blocks=1 00:19:48.513 --rc geninfo_unexecuted_blocks=1 00:19:48.513 00:19:48.513 ' 00:19:48.513 11:47:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.513 --rc genhtml_branch_coverage=1 00:19:48.513 --rc genhtml_function_coverage=1 00:19:48.513 --rc genhtml_legend=1 00:19:48.513 --rc geninfo_all_blocks=1 00:19:48.513 --rc geninfo_unexecuted_blocks=1 00:19:48.513 00:19:48.513 ' 00:19:48.513 11:47:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.513 --rc genhtml_branch_coverage=1 00:19:48.513 --rc genhtml_function_coverage=1 00:19:48.513 --rc genhtml_legend=1 00:19:48.513 --rc geninfo_all_blocks=1 00:19:48.513 --rc geninfo_unexecuted_blocks=1 00:19:48.513 00:19:48.513 ' 00:19:48.513 11:47:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.513 --rc genhtml_branch_coverage=1 00:19:48.513 --rc genhtml_function_coverage=1 00:19:48.513 --rc genhtml_legend=1 00:19:48.513 --rc geninfo_all_blocks=1 00:19:48.513 --rc geninfo_unexecuted_blocks=1 00:19:48.513 00:19:48.513 ' 00:19:48.513 11:47:21 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.513 11:47:21 -- nvmf/common.sh@7 -- # uname -s 00:19:48.513 11:47:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.513 11:47:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.513 11:47:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.513 11:47:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.513 11:47:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.513 11:47:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.513 11:47:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.513 11:47:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.513 11:47:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.513 11:47:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.513 11:47:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:19:48.513 
11:47:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:19:48.513 11:47:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.513 11:47:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.513 11:47:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:48.513 11:47:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.772 11:47:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.772 11:47:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.772 11:47:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.772 11:47:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.772 11:47:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.772 11:47:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.772 11:47:21 -- paths/export.sh@5 -- # export PATH 00:19:48.772 11:47:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.772 11:47:21 -- nvmf/common.sh@46 -- # : 0 00:19:48.772 11:47:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:48.772 11:47:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:48.772 11:47:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:48.772 11:47:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.772 11:47:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.772 11:47:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:48.772 11:47:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:48.772 11:47:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:48.772 11:47:21 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.772 11:47:21 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.772 11:47:21 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:48.772 11:47:21 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:48.772 11:47:21 -- target/multipath.sh@43 -- # nvmftestinit 00:19:48.772 11:47:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:48.772 11:47:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.772 11:47:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:48.772 11:47:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:48.772 11:47:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:48.772 11:47:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.772 11:47:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.772 11:47:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.772 11:47:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:48.772 11:47:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:48.772 11:47:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:48.772 11:47:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:48.772 11:47:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:48.772 11:47:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:48.772 11:47:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.773 11:47:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.773 11:47:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:48.773 11:47:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:48.773 11:47:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:48.773 11:47:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:48.773 11:47:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:48.773 11:47:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.773 11:47:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:48.773 11:47:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:48.773 11:47:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:48.773 11:47:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:48.773 11:47:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:48.773 11:47:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:48.773 Cannot find device "nvmf_tgt_br" 00:19:48.773 11:47:21 -- nvmf/common.sh@154 -- # true 00:19:48.773 11:47:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:48.773 Cannot find device "nvmf_tgt_br2" 00:19:48.773 11:47:21 -- nvmf/common.sh@155 -- # true 00:19:48.773 11:47:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:48.773 11:47:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:48.773 Cannot find device "nvmf_tgt_br" 00:19:48.773 11:47:21 -- nvmf/common.sh@157 -- # true 00:19:48.773 11:47:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:48.773 Cannot find device "nvmf_tgt_br2" 00:19:48.773 11:47:21 -- nvmf/common.sh@158 -- # true 00:19:48.773 11:47:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:48.773 11:47:21 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:48.773 11:47:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:48.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.773 11:47:21 -- nvmf/common.sh@161 -- # true 00:19:48.773 11:47:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.773 11:47:21 -- nvmf/common.sh@162 -- # true 00:19:48.773 11:47:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:48.773 11:47:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.773 11:47:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.773 11:47:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.773 11:47:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.773 11:47:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.773 11:47:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.773 11:47:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:48.773 11:47:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:48.773 11:47:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:49.032 11:47:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:49.032 11:47:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:49.032 11:47:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:49.032 11:47:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.032 11:47:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:49.032 11:47:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:49.032 11:47:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:49.032 11:47:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:49.032 11:47:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.032 11:47:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.032 11:47:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.032 11:47:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.032 11:47:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.032 11:47:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:49.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:19:49.032 00:19:49.032 --- 10.0.0.2 ping statistics --- 00:19:49.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.032 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:19:49.032 11:47:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:49.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:49.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:19:49.032 00:19:49.032 --- 10.0.0.3 ping statistics --- 00:19:49.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.032 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:49.032 11:47:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:19:49.032 00:19:49.032 --- 10.0.0.1 ping statistics --- 00:19:49.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.032 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:49.032 11:47:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.032 11:47:21 -- nvmf/common.sh@421 -- # return 0 00:19:49.032 11:47:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.032 11:47:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.032 11:47:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.032 11:47:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.032 11:47:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.032 11:47:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.032 11:47:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.032 11:47:21 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:19:49.032 11:47:21 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:19:49.032 11:47:21 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:19:49.032 11:47:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.032 11:47:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.032 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:49.032 11:47:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.032 11:47:21 -- nvmf/common.sh@469 -- # nvmfpid=75060 00:19:49.032 11:47:21 -- nvmf/common.sh@470 -- # waitforlisten 75060 00:19:49.032 11:47:21 -- common/autotest_common.sh@829 -- # '[' -z 75060 ']' 00:19:49.032 11:47:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.032 11:47:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.032 11:47:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.032 11:47:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.032 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:49.032 [2024-11-20 11:47:21.963084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:49.032 [2024-11-20 11:47:21.963135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.291 [2024-11-20 11:47:22.105322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.291 [2024-11-20 11:47:22.198190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.291 [2024-11-20 11:47:22.198295] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
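For reference, the nvmf_veth_init sequence traced above (and earlier in the queue-depth test) builds the same three-NIC topology each time: the initiator stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and the host ends of the veth pairs hang off the nvmf_br bridge. Stripped of the cleanup and retry logic in nvmf/common.sh, a sketch of the commands seen in the trace is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # (individual 'ip link set ... up' commands omitted for brevity)
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

plus the ping checks visible above, confirming 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 are reachable before the target application is launched.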
00:19:49.291 [2024-11-20 11:47:22.198302] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.291 [2024-11-20 11:47:22.198307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.291 [2024-11-20 11:47:22.198536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.291 [2024-11-20 11:47:22.198765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.291 [2024-11-20 11:47:22.198918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.291 [2024-11-20 11:47:22.198920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.859 11:47:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.859 11:47:22 -- common/autotest_common.sh@862 -- # return 0 00:19:49.859 11:47:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.859 11:47:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.859 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:19:49.859 11:47:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.860 11:47:22 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:50.119 [2024-11-20 11:47:23.052981] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.119 11:47:23 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:50.377 Malloc0 00:19:50.377 11:47:23 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:19:50.636 11:47:23 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.896 11:47:23 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.896 [2024-11-20 11:47:23.913325] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.156 11:47:23 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:51.156 [2024-11-20 11:47:24.113591] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:51.156 11:47:24 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:19:51.415 11:47:24 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:19:51.675 11:47:24 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:19:51.675 11:47:24 -- common/autotest_common.sh@1187 -- # local i=0 00:19:51.675 11:47:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:51.675 11:47:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:51.675 11:47:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:53.612 11:47:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
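The multipath target setup that just ran differs from the queue-depth one in two ways worth noting: the subsystem is created with ANA reporting enabled (-r) and it gets two TCP listeners, one per target address, so the two host-side nvme connect calls to 10.0.0.2 and 10.0.0.3 produce two controllers (nvme0c0n1, nvme0c1n1) under a single multipath namespace nvme0n1. A condensed sketch of the path-manipulation side, using the addresses and ANA states from this trace ($NVME_HOSTNQN and $NVME_HOSTID are the values generated by nvme gen-hostnqn above):

  # subsystem with ANA reporting and two portals
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # host: one connect per portal, both land in the same subsystem device
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID
  # flip the ANA state per listener, then confirm the kernel's view of each path
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
  cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized

The waitforserial loop that follows simply polls lsblk for the SPDKISFASTANDAWESOME serial; once the device is present, the check_ana_state helpers read those sysfs files while fio drives random read/write traffic through the multipath device.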
00:19:53.612 11:47:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:53.612 11:47:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:53.612 11:47:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:53.612 11:47:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.612 11:47:26 -- common/autotest_common.sh@1197 -- # return 0 00:19:53.612 11:47:26 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:19:53.612 11:47:26 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:19:53.612 11:47:26 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:19:53.612 11:47:26 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:19:53.612 11:47:26 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:19:53.612 11:47:26 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:19:53.612 11:47:26 -- target/multipath.sh@38 -- # return 0 00:19:53.613 11:47:26 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:19:53.613 11:47:26 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:19:53.613 11:47:26 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:19:53.613 11:47:26 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:19:53.613 11:47:26 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:19:53.613 11:47:26 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:19:53.613 11:47:26 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:19:53.613 11:47:26 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:19:53.613 11:47:26 -- target/multipath.sh@22 -- # local timeout=20 00:19:53.613 11:47:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:53.613 11:47:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:53.613 11:47:26 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:53.613 11:47:26 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:19:53.613 11:47:26 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:19:53.613 11:47:26 -- target/multipath.sh@22 -- # local timeout=20 00:19:53.613 11:47:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:53.613 11:47:26 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:53.613 11:47:26 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:53.613 11:47:26 -- target/multipath.sh@85 -- # echo numa 00:19:53.613 11:47:26 -- target/multipath.sh@88 -- # fio_pid=75204 00:19:53.613 11:47:26 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:19:53.613 11:47:26 -- target/multipath.sh@90 -- # sleep 1 00:19:53.881 [global] 00:19:53.881 thread=1 00:19:53.881 invalidate=1 00:19:53.881 rw=randrw 00:19:53.881 time_based=1 00:19:53.881 runtime=6 00:19:53.881 ioengine=libaio 00:19:53.881 direct=1 00:19:53.881 bs=4096 00:19:53.881 iodepth=128 00:19:53.881 norandommap=0 00:19:53.881 numjobs=1 00:19:53.881 00:19:53.881 verify_dump=1 00:19:53.881 verify_backlog=512 00:19:53.881 verify_state_save=0 00:19:53.881 do_verify=1 00:19:53.881 verify=crc32c-intel 00:19:53.881 [job0] 00:19:53.881 filename=/dev/nvme0n1 00:19:53.881 Could not set queue depth (nvme0n1) 00:19:53.881 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:53.881 fio-3.35 00:19:53.881 Starting 1 thread 00:19:54.816 11:47:27 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:54.816 11:47:27 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:55.076 11:47:28 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:19:55.076 11:47:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:19:55.076 11:47:28 -- target/multipath.sh@22 -- # local timeout=20 00:19:55.076 11:47:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:55.076 11:47:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:55.076 11:47:28 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:55.076 11:47:28 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:19:55.076 11:47:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:19:55.076 11:47:28 -- target/multipath.sh@22 -- # local timeout=20 00:19:55.076 11:47:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:55.076 11:47:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:55.076 11:47:28 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:55.076 11:47:28 -- target/multipath.sh@25 -- # sleep 1s 00:19:56.454 11:47:29 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:56.454 11:47:29 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:56.454 11:47:29 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:56.454 11:47:29 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:56.454 11:47:29 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:56.454 11:47:29 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:19:56.454 11:47:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:19:56.454 11:47:29 -- target/multipath.sh@22 -- # local timeout=20 00:19:56.454 11:47:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:56.454 11:47:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:56.454 11:47:29 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:56.454 11:47:29 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:19:56.454 11:47:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:19:56.454 11:47:29 -- target/multipath.sh@22 -- # local timeout=20 00:19:56.454 11:47:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:56.454 11:47:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:56.454 11:47:29 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:56.454 11:47:29 -- target/multipath.sh@25 -- # sleep 1s 00:19:57.830 11:47:30 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:57.830 11:47:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:57.830 11:47:30 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:57.830 11:47:30 -- target/multipath.sh@104 -- # wait 75204 00:20:00.361 00:20:00.361 job0: (groupid=0, jobs=1): err= 0: pid=75226: Wed Nov 20 11:47:33 2024 00:20:00.361 read: IOPS=14.5k, BW=56.5MiB/s (59.2MB/s)(339MiB/6004msec) 00:20:00.361 slat (usec): min=3, max=9167, avg=36.55, stdev=145.59 00:20:00.361 clat (usec): min=672, max=44026, avg=6066.89, stdev=1220.98 00:20:00.361 lat (usec): min=690, max=44054, avg=6103.44, stdev=1222.98 00:20:00.361 clat percentiles (usec): 00:20:00.361 | 1.00th=[ 3916], 5.00th=[ 4555], 10.00th=[ 4817], 20.00th=[ 5211], 00:20:00.361 | 30.00th=[ 5538], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6194], 00:20:00.361 | 70.00th=[ 6390], 80.00th=[ 6652], 90.00th=[ 7242], 95.00th=[ 8029], 00:20:00.361 | 99.00th=[10159], 99.50th=[12125], 99.90th=[14615], 99.95th=[14877], 00:20:00.361 | 99.99th=[17171] 00:20:00.361 bw ( KiB/s): min=12000, max=39824, per=50.81%, avg=29374.91, stdev=8970.62, samples=11 00:20:00.361 iops : min= 3000, max= 9956, avg=7343.73, stdev=2242.66, samples=11 00:20:00.361 write: IOPS=8664, BW=33.8MiB/s (35.5MB/s)(174MiB/5142msec); 0 zone resets 00:20:00.361 slat (usec): min=14, max=2363, avg=51.32, stdev=94.07 00:20:00.361 clat (usec): min=443, max=14624, avg=5256.84, stdev=1084.74 00:20:00.361 lat (usec): min=540, max=15133, avg=5308.16, stdev=1086.47 00:20:00.361 clat percentiles (usec): 00:20:00.361 | 1.00th=[ 2966], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 4621], 00:20:00.361 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5211], 60.00th=[ 5342], 00:20:00.361 | 70.00th=[ 5538], 80.00th=[ 5735], 90.00th=[ 6194], 95.00th=[ 6849], 00:20:00.362 | 99.00th=[ 9241], 99.50th=[10683], 99.90th=[13698], 99.95th=[13960], 00:20:00.362 | 99.99th=[14353] 00:20:00.362 bw ( KiB/s): min=12328, max=39056, per=84.82%, avg=29397.27, stdev=8622.76, samples=11 00:20:00.362 iops : min= 3082, max= 9764, avg=7349.27, stdev=2155.66, samples=11 00:20:00.362 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:20:00.362 lat (msec) : 2=0.13%, 4=3.00%, 10=95.89%, 20=0.94%, 50=0.01% 00:20:00.362 cpu : usr=6.53%, sys=33.25%, ctx=9593, majf=0, minf=163 00:20:00.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:00.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.362 issued rwts: total=86770,44552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.362 00:20:00.362 Run status group 0 (all jobs): 00:20:00.362 READ: bw=56.5MiB/s (59.2MB/s), 56.5MiB/s-56.5MiB/s (59.2MB/s-59.2MB/s), io=339MiB (355MB), run=6004-6004msec 00:20:00.362 WRITE: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=174MiB (182MB), run=5142-5142msec 00:20:00.362 00:20:00.362 Disk stats (read/write): 00:20:00.362 nvme0n1: ios=85699/43722, merge=0/0, ticks=464440/201152, in_queue=665592, util=98.70% 00:20:00.362 11:47:33 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:20:00.362 11:47:33 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:00.362 11:47:33 -- 
target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:20:00.362 11:47:33 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:20:00.362 11:47:33 -- target/multipath.sh@22 -- # local timeout=20 00:20:00.362 11:47:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:00.362 11:47:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:00.362 11:47:33 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:00.362 11:47:33 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:20:00.362 11:47:33 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:20:00.362 11:47:33 -- target/multipath.sh@22 -- # local timeout=20 00:20:00.362 11:47:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:00.362 11:47:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:00.362 11:47:33 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:20:00.362 11:47:33 -- target/multipath.sh@25 -- # sleep 1s 00:20:01.762 11:47:34 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:01.762 11:47:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:01.762 11:47:34 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:01.762 11:47:34 -- target/multipath.sh@113 -- # echo round-robin 00:20:01.762 11:47:34 -- target/multipath.sh@116 -- # fio_pid=75357 00:20:01.762 11:47:34 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:20:01.762 11:47:34 -- target/multipath.sh@118 -- # sleep 1 00:20:01.762 [global] 00:20:01.762 thread=1 00:20:01.762 invalidate=1 00:20:01.762 rw=randrw 00:20:01.762 time_based=1 00:20:01.762 runtime=6 00:20:01.762 ioengine=libaio 00:20:01.762 direct=1 00:20:01.762 bs=4096 00:20:01.762 iodepth=128 00:20:01.762 norandommap=0 00:20:01.762 numjobs=1 00:20:01.762 00:20:01.762 verify_dump=1 00:20:01.762 verify_backlog=512 00:20:01.762 verify_state_save=0 00:20:01.762 do_verify=1 00:20:01.762 verify=crc32c-intel 00:20:01.762 [job0] 00:20:01.762 filename=/dev/nvme0n1 00:20:01.762 Could not set queue depth (nvme0n1) 00:20:01.762 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:01.762 fio-3.35 00:20:01.762 Starting 1 thread 00:20:02.698 11:47:35 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:02.698 11:47:35 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:02.957 11:47:35 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:20:02.957 11:47:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:20:02.957 11:47:35 -- target/multipath.sh@22 -- # local timeout=20 00:20:02.957 11:47:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:02.957 11:47:35 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:20:02.957 11:47:35 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:02.957 11:47:35 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:20:02.957 11:47:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:20:02.957 11:47:35 -- target/multipath.sh@22 -- # local timeout=20 00:20:02.957 11:47:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:02.957 11:47:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:02.957 11:47:35 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:02.957 11:47:35 -- target/multipath.sh@25 -- # sleep 1s 00:20:03.894 11:47:36 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:03.894 11:47:36 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:03.894 11:47:36 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:03.894 11:47:36 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:04.153 11:47:37 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:04.412 11:47:37 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:20:04.412 11:47:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:20:04.412 11:47:37 -- target/multipath.sh@22 -- # local timeout=20 00:20:04.412 11:47:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:04.412 11:47:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:04.412 11:47:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:04.412 11:47:37 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:20:04.412 11:47:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:20:04.412 11:47:37 -- target/multipath.sh@22 -- # local timeout=20 00:20:04.412 11:47:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:04.412 11:47:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:04.412 11:47:37 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:04.412 11:47:37 -- target/multipath.sh@25 -- # sleep 1s 00:20:05.348 11:47:38 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:05.348 11:47:38 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:05.348 11:47:38 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:05.348 11:47:38 -- target/multipath.sh@132 -- # wait 75357 00:20:07.889 00:20:07.889 job0: (groupid=0, jobs=1): err= 0: pid=75383: Wed Nov 20 11:47:40 2024 00:20:07.889 read: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(334MiB/6005msec) 00:20:07.889 slat (usec): min=3, max=4143, avg=33.94, stdev=138.07 00:20:07.889 clat (usec): min=251, max=20170, avg=6154.44, stdev=2217.40 00:20:07.889 lat (usec): min=264, max=20180, avg=6188.38, stdev=2217.94 00:20:07.889 clat percentiles (usec): 00:20:07.889 | 1.00th=[ 1090], 5.00th=[ 3621], 10.00th=[ 4555], 20.00th=[ 5080], 00:20:07.889 | 30.00th=[ 5407], 40.00th=[ 5669], 50.00th=[ 5932], 60.00th=[ 6194], 00:20:07.889 | 70.00th=[ 6390], 80.00th=[ 6718], 90.00th=[ 7701], 95.00th=[10028], 00:20:07.889 | 99.00th=[15795], 99.50th=[16909], 99.90th=[18220], 99.95th=[18744], 00:20:07.889 | 99.99th=[19268] 00:20:07.889 bw ( KiB/s): min=15168, max=37360, per=51.22%, avg=29147.82, stdev=7383.79, samples=11 00:20:07.889 iops : min= 3792, max= 9340, avg=7286.91, stdev=1845.99, samples=11 00:20:07.889 write: IOPS=8516, BW=33.3MiB/s (34.9MB/s)(174MiB/5228msec); 0 zone resets 00:20:07.889 slat (usec): min=7, max=2842, avg=47.58, stdev=85.93 00:20:07.889 clat (usec): min=180, max=17198, avg=5294.57, stdev=2147.37 00:20:07.889 lat (usec): min=225, max=17249, avg=5342.15, stdev=2147.98 00:20:07.889 clat percentiles (usec): 00:20:07.889 | 1.00th=[ 742], 5.00th=[ 2245], 10.00th=[ 3556], 20.00th=[ 4293], 00:20:07.889 | 30.00th=[ 4686], 40.00th=[ 4948], 50.00th=[ 5145], 60.00th=[ 5276], 00:20:07.889 | 70.00th=[ 5538], 80.00th=[ 5800], 90.00th=[ 6587], 95.00th=[ 9634], 00:20:07.889 | 99.00th=[13829], 99.50th=[14353], 99.90th=[15533], 99.95th=[15795], 00:20:07.889 | 99.99th=[16581] 00:20:07.889 bw ( KiB/s): min=15496, max=36784, per=85.67%, avg=29187.73, stdev=6940.37, samples=11 00:20:07.889 iops : min= 3874, max= 9196, avg=7296.91, stdev=1735.11, samples=11 00:20:07.889 lat (usec) : 250=0.01%, 500=0.17%, 750=0.45%, 1000=0.60% 00:20:07.889 lat (msec) : 2=2.49%, 4=5.35%, 10=86.03%, 20=4.91%, 50=0.01% 00:20:07.889 cpu : usr=6.00%, sys=32.00%, ctx=10720, majf=0, minf=127 00:20:07.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:07.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:07.889 issued rwts: total=85426,44526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:07.889 00:20:07.889 Run status group 0 (all jobs): 00:20:07.889 READ: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=334MiB (350MB), run=6005-6005msec 00:20:07.889 WRITE: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=174MiB (182MB), run=5228-5228msec 00:20:07.889 00:20:07.889 Disk stats (read/write): 00:20:07.889 nvme0n1: ios=84577/43331, merge=0/0, ticks=469465/202915, in_queue=672380, util=98.55% 00:20:07.889 11:47:40 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:07.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:07.889 11:47:40 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:07.889 11:47:40 -- common/autotest_common.sh@1208 -- # local i=0 00:20:07.889 11:47:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:07.889 
11:47:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:07.889 11:47:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:07.889 11:47:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:07.889 11:47:40 -- common/autotest_common.sh@1220 -- # return 0 00:20:07.889 11:47:40 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.149 11:47:41 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:20:08.149 11:47:41 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:20:08.149 11:47:41 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:20:08.149 11:47:41 -- target/multipath.sh@144 -- # nvmftestfini 00:20:08.149 11:47:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.149 11:47:41 -- nvmf/common.sh@116 -- # sync 00:20:08.149 11:47:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:08.149 11:47:41 -- nvmf/common.sh@119 -- # set +e 00:20:08.149 11:47:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.149 11:47:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:08.149 rmmod nvme_tcp 00:20:08.149 rmmod nvme_fabrics 00:20:08.149 rmmod nvme_keyring 00:20:08.149 11:47:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.149 11:47:41 -- nvmf/common.sh@123 -- # set -e 00:20:08.149 11:47:41 -- nvmf/common.sh@124 -- # return 0 00:20:08.149 11:47:41 -- nvmf/common.sh@477 -- # '[' -n 75060 ']' 00:20:08.149 11:47:41 -- nvmf/common.sh@478 -- # killprocess 75060 00:20:08.149 11:47:41 -- common/autotest_common.sh@936 -- # '[' -z 75060 ']' 00:20:08.149 11:47:41 -- common/autotest_common.sh@940 -- # kill -0 75060 00:20:08.149 11:47:41 -- common/autotest_common.sh@941 -- # uname 00:20:08.149 11:47:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.149 11:47:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75060 00:20:08.409 killing process with pid 75060 00:20:08.409 11:47:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:08.409 11:47:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:08.409 11:47:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75060' 00:20:08.409 11:47:41 -- common/autotest_common.sh@955 -- # kill 75060 00:20:08.409 11:47:41 -- common/autotest_common.sh@960 -- # wait 75060 00:20:08.667 11:47:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:08.667 11:47:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:08.667 11:47:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:08.667 11:47:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.667 11:47:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:08.667 11:47:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.667 11:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.667 11:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.667 11:47:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:08.667 00:20:08.667 real 0m20.203s 00:20:08.667 user 1m18.360s 00:20:08.667 sys 0m7.074s 00:20:08.667 11:47:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.667 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:20:08.667 ************************************ 00:20:08.667 END TEST nvmf_multipath 00:20:08.667 ************************************ 00:20:08.667 11:47:41 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:08.667 11:47:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:08.667 11:47:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.667 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:20:08.667 ************************************ 00:20:08.667 START TEST nvmf_zcopy 00:20:08.667 ************************************ 00:20:08.667 11:47:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:08.667 * Looking for test storage... 00:20:08.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:08.952 11:47:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:08.952 11:47:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:08.952 11:47:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:08.952 11:47:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:08.952 11:47:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:08.952 11:47:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:08.952 11:47:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:08.952 11:47:41 -- scripts/common.sh@335 -- # IFS=.-: 00:20:08.952 11:47:41 -- scripts/common.sh@335 -- # read -ra ver1 00:20:08.952 11:47:41 -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.952 11:47:41 -- scripts/common.sh@336 -- # read -ra ver2 00:20:08.952 11:47:41 -- scripts/common.sh@337 -- # local 'op=<' 00:20:08.952 11:47:41 -- scripts/common.sh@339 -- # ver1_l=2 00:20:08.952 11:47:41 -- scripts/common.sh@340 -- # ver2_l=1 00:20:08.952 11:47:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:08.952 11:47:41 -- scripts/common.sh@343 -- # case "$op" in 00:20:08.952 11:47:41 -- scripts/common.sh@344 -- # : 1 00:20:08.952 11:47:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:08.952 11:47:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:08.952 11:47:41 -- scripts/common.sh@364 -- # decimal 1 00:20:08.952 11:47:41 -- scripts/common.sh@352 -- # local d=1 00:20:08.952 11:47:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.952 11:47:41 -- scripts/common.sh@354 -- # echo 1 00:20:08.952 11:47:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:08.952 11:47:41 -- scripts/common.sh@365 -- # decimal 2 00:20:08.952 11:47:41 -- scripts/common.sh@352 -- # local d=2 00:20:08.952 11:47:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.952 11:47:41 -- scripts/common.sh@354 -- # echo 2 00:20:08.952 11:47:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:08.952 11:47:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:08.952 11:47:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:08.952 11:47:41 -- scripts/common.sh@367 -- # return 0 00:20:08.952 11:47:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.952 11:47:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.952 --rc genhtml_branch_coverage=1 00:20:08.952 --rc genhtml_function_coverage=1 00:20:08.952 --rc genhtml_legend=1 00:20:08.952 --rc geninfo_all_blocks=1 00:20:08.952 --rc geninfo_unexecuted_blocks=1 00:20:08.952 00:20:08.952 ' 00:20:08.952 11:47:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.952 --rc genhtml_branch_coverage=1 00:20:08.952 --rc genhtml_function_coverage=1 00:20:08.952 --rc genhtml_legend=1 00:20:08.952 --rc geninfo_all_blocks=1 00:20:08.952 --rc geninfo_unexecuted_blocks=1 00:20:08.952 00:20:08.952 ' 00:20:08.952 11:47:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.952 --rc genhtml_branch_coverage=1 00:20:08.952 --rc genhtml_function_coverage=1 00:20:08.952 --rc genhtml_legend=1 00:20:08.952 --rc geninfo_all_blocks=1 00:20:08.952 --rc geninfo_unexecuted_blocks=1 00:20:08.952 00:20:08.952 ' 00:20:08.952 11:47:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.952 --rc genhtml_branch_coverage=1 00:20:08.952 --rc genhtml_function_coverage=1 00:20:08.952 --rc genhtml_legend=1 00:20:08.952 --rc geninfo_all_blocks=1 00:20:08.952 --rc geninfo_unexecuted_blocks=1 00:20:08.952 00:20:08.952 ' 00:20:08.952 11:47:41 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.952 11:47:41 -- nvmf/common.sh@7 -- # uname -s 00:20:08.952 11:47:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.952 11:47:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.952 11:47:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.952 11:47:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.952 11:47:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.952 11:47:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.953 11:47:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.953 11:47:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.953 11:47:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.953 11:47:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.953 11:47:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:20:08.953 
11:47:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:20:08.953 11:47:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.953 11:47:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.953 11:47:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.953 11:47:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.953 11:47:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.953 11:47:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.953 11:47:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.953 11:47:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.953 11:47:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.953 11:47:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.953 11:47:41 -- paths/export.sh@5 -- # export PATH 00:20:08.953 11:47:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.953 11:47:41 -- nvmf/common.sh@46 -- # : 0 00:20:08.953 11:47:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:08.953 11:47:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:08.953 11:47:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:08.953 11:47:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.953 11:47:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.953 11:47:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:08.953 11:47:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:08.953 11:47:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:08.953 11:47:41 -- target/zcopy.sh@12 -- # nvmftestinit 00:20:08.953 11:47:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:08.953 11:47:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.953 11:47:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:08.953 11:47:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:08.953 11:47:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:08.953 11:47:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.953 11:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.953 11:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.953 11:47:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:08.953 11:47:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:08.953 11:47:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:08.953 11:47:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:08.953 11:47:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:08.953 11:47:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:08.953 11:47:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.953 11:47:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.953 11:47:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:08.953 11:47:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:08.953 11:47:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.953 11:47:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.953 11:47:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.953 11:47:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.953 11:47:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.953 11:47:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.953 11:47:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.953 11:47:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.953 11:47:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:08.953 11:47:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:08.953 Cannot find device "nvmf_tgt_br" 00:20:08.953 11:47:41 -- nvmf/common.sh@154 -- # true 00:20:08.953 11:47:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.953 Cannot find device "nvmf_tgt_br2" 00:20:08.953 11:47:41 -- nvmf/common.sh@155 -- # true 00:20:08.953 11:47:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:08.953 11:47:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:08.953 Cannot find device "nvmf_tgt_br" 00:20:08.953 11:47:41 -- nvmf/common.sh@157 -- # true 00:20:08.953 11:47:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:08.953 Cannot find device "nvmf_tgt_br2" 00:20:08.953 11:47:41 -- nvmf/common.sh@158 -- # true 00:20:08.953 11:47:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:09.243 11:47:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:09.244 11:47:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:09.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.244 11:47:42 -- nvmf/common.sh@161 -- # true 00:20:09.244 11:47:42 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:09.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.244 11:47:42 -- nvmf/common.sh@162 -- # true 00:20:09.244 11:47:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:09.244 11:47:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:09.244 11:47:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:09.244 11:47:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:09.244 11:47:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:09.244 11:47:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:09.244 11:47:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:09.244 11:47:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:09.244 11:47:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:09.244 11:47:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:09.244 11:47:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:09.244 11:47:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:09.244 11:47:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:09.244 11:47:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:09.244 11:47:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:09.244 11:47:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:09.244 11:47:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:09.244 11:47:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:09.244 11:47:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:09.244 11:47:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:09.244 11:47:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:09.244 11:47:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:09.244 11:47:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:09.244 11:47:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:09.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:20:09.244 00:20:09.244 --- 10.0.0.2 ping statistics --- 00:20:09.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.244 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:09.244 11:47:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:09.244 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:09.244 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.191 ms 00:20:09.244 00:20:09.244 --- 10.0.0.3 ping statistics --- 00:20:09.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.244 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:09.244 11:47:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:09.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:20:09.244 00:20:09.244 --- 10.0.0.1 ping statistics --- 00:20:09.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.244 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:09.244 11:47:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.244 11:47:42 -- nvmf/common.sh@421 -- # return 0 00:20:09.244 11:47:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:09.244 11:47:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.244 11:47:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:09.244 11:47:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:09.244 11:47:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.244 11:47:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:09.244 11:47:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:09.244 11:47:42 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:09.244 11:47:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:09.244 11:47:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.244 11:47:42 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 11:47:42 -- nvmf/common.sh@469 -- # nvmfpid=75662 00:20:09.503 11:47:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:09.503 11:47:42 -- nvmf/common.sh@470 -- # waitforlisten 75662 00:20:09.503 11:47:42 -- common/autotest_common.sh@829 -- # '[' -z 75662 ']' 00:20:09.503 11:47:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.503 11:47:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.503 11:47:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.503 11:47:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.503 11:47:42 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 [2024-11-20 11:47:42.338388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:09.503 [2024-11-20 11:47:42.338459] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.503 [2024-11-20 11:47:42.476632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.762 [2024-11-20 11:47:42.617439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.762 [2024-11-20 11:47:42.617559] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.762 [2024-11-20 11:47:42.617567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.762 [2024-11-20 11:47:42.617572] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.762 [2024-11-20 11:47:42.617599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.329 11:47:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.329 11:47:43 -- common/autotest_common.sh@862 -- # return 0 00:20:10.329 11:47:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:10.329 11:47:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.329 11:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 11:47:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.329 11:47:43 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:20:10.329 11:47:43 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:20:10.329 11:47:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.329 11:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 [2024-11-20 11:47:43.258087] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.329 11:47:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.329 11:47:43 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:10.329 11:47:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.329 11:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 11:47:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.329 11:47:43 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.329 11:47:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.329 11:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 [2024-11-20 11:47:43.282177] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.329 11:47:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.329 11:47:43 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:10.329 11:47:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.329 11:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 11:47:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.329 11:47:43 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:20:10.329 11:47:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.329 11:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 malloc0 00:20:10.329 11:47:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.329 11:47:43 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.329 11:47:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.329 11:47:43 -- common/autotest_common.sh@10 -- # set +x 00:20:10.330 11:47:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.330 11:47:43 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:20:10.330 11:47:43 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:20:10.330 11:47:43 -- nvmf/common.sh@520 -- # config=() 00:20:10.330 11:47:43 -- nvmf/common.sh@520 -- # local subsystem config 00:20:10.330 11:47:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:10.330 11:47:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:10.330 { 00:20:10.330 "params": { 00:20:10.330 "name": "Nvme$subsystem", 00:20:10.330 "trtype": "$TEST_TRANSPORT", 
00:20:10.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.330 "adrfam": "ipv4", 00:20:10.330 "trsvcid": "$NVMF_PORT", 00:20:10.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.330 "hdgst": ${hdgst:-false}, 00:20:10.330 "ddgst": ${ddgst:-false} 00:20:10.330 }, 00:20:10.330 "method": "bdev_nvme_attach_controller" 00:20:10.330 } 00:20:10.330 EOF 00:20:10.330 )") 00:20:10.330 11:47:43 -- nvmf/common.sh@542 -- # cat 00:20:10.330 11:47:43 -- nvmf/common.sh@544 -- # jq . 00:20:10.330 11:47:43 -- nvmf/common.sh@545 -- # IFS=, 00:20:10.330 11:47:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:10.330 "params": { 00:20:10.330 "name": "Nvme1", 00:20:10.330 "trtype": "tcp", 00:20:10.330 "traddr": "10.0.0.2", 00:20:10.330 "adrfam": "ipv4", 00:20:10.330 "trsvcid": "4420", 00:20:10.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.330 "hdgst": false, 00:20:10.330 "ddgst": false 00:20:10.330 }, 00:20:10.330 "method": "bdev_nvme_attach_controller" 00:20:10.330 }' 00:20:10.589 [2024-11-20 11:47:43.388014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:10.589 [2024-11-20 11:47:43.388088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75714 ] 00:20:10.589 [2024-11-20 11:47:43.525488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.589 [2024-11-20 11:47:43.616699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.848 Running I/O for 10 seconds... 00:20:20.829 00:20:20.829 Latency(us) 00:20:20.829 [2024-11-20T11:47:53.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.829 [2024-11-20T11:47:53.872Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:20.829 Verification LBA range: start 0x0 length 0x1000 00:20:20.829 Nvme1n1 : 10.01 12506.79 97.71 0.00 0.00 10211.87 787.00 18773.63 00:20:20.829 [2024-11-20T11:47:53.873Z] =================================================================================================================== 00:20:20.830 [2024-11-20T11:47:53.873Z] Total : 12506.79 97.71 0.00 0.00 10211.87 787.00 18773.63 00:20:21.090 11:47:53 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:21.090 11:47:53 -- target/zcopy.sh@39 -- # perfpid=75837 00:20:21.090 11:47:53 -- target/zcopy.sh@41 -- # xtrace_disable 00:20:21.090 11:47:53 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:21.090 11:47:53 -- common/autotest_common.sh@10 -- # set +x 00:20:21.090 11:47:53 -- nvmf/common.sh@520 -- # config=() 00:20:21.090 11:47:53 -- nvmf/common.sh@520 -- # local subsystem config 00:20:21.090 11:47:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:21.090 11:47:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:21.090 { 00:20:21.090 "params": { 00:20:21.090 "name": "Nvme$subsystem", 00:20:21.090 "trtype": "$TEST_TRANSPORT", 00:20:21.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.090 "adrfam": "ipv4", 00:20:21.090 "trsvcid": "$NVMF_PORT", 00:20:21.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.090 "hdgst": ${hdgst:-false}, 00:20:21.090 "ddgst": ${ddgst:-false} 
00:20:21.090 }, 00:20:21.090 "method": "bdev_nvme_attach_controller" 00:20:21.090 } 00:20:21.090 EOF 00:20:21.090 )") 00:20:21.090 11:47:53 -- nvmf/common.sh@542 -- # cat 00:20:21.090 [2024-11-20 11:47:53.990643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.090 [2024-11-20 11:47:53.990704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.090 11:47:53 -- nvmf/common.sh@544 -- # jq . 00:20:21.090 2024/11/20 11:47:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.090 11:47:53 -- nvmf/common.sh@545 -- # IFS=, 00:20:21.090 11:47:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:21.090 "params": { 00:20:21.090 "name": "Nvme1", 00:20:21.090 "trtype": "tcp", 00:20:21.090 "traddr": "10.0.0.2", 00:20:21.090 "adrfam": "ipv4", 00:20:21.090 "trsvcid": "4420", 00:20:21.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.090 "hdgst": false, 00:20:21.090 "ddgst": false 00:20:21.090 }, 00:20:21.090 "method": "bdev_nvme_attach_controller" 00:20:21.090 }' 00:20:21.090 [2024-11-20 11:47:54.002555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.090 [2024-11-20 11:47:54.002577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.090 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.090 [2024-11-20 11:47:54.009165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:21.090 [2024-11-20 11:47:54.009217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75837 ] 00:20:21.090 [2024-11-20 11:47:54.014544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.090 [2024-11-20 11:47:54.014564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.090 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.090 [2024-11-20 11:47:54.026512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.090 [2024-11-20 11:47:54.026533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.090 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.090 [2024-11-20 11:47:54.038497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.090 [2024-11-20 11:47:54.038514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.090 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.090 [2024-11-20 11:47:54.050463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.090 [2024-11-20 11:47:54.050483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.090 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.091 [2024-11-20 11:47:54.062441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.091 [2024-11-20 11:47:54.062461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.091 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.091 [2024-11-20 11:47:54.074431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.091 [2024-11-20 11:47:54.074450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.091 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.091 [2024-11-20 11:47:54.086420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.091 [2024-11-20 11:47:54.086441] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.091 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.091 [2024-11-20 11:47:54.098394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.091 [2024-11-20 11:47:54.098418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.091 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.091 [2024-11-20 11:47:54.110374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.091 [2024-11-20 11:47:54.110394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.091 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.091 [2024-11-20 11:47:54.122349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.091 [2024-11-20 11:47:54.122366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.091 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.351 [2024-11-20 11:47:54.134316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.351 [2024-11-20 11:47:54.134334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.351 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.351 [2024-11-20 11:47:54.145284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.351 [2024-11-20 11:47:54.146297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.351 [2024-11-20 11:47:54.146315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.351 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.351 [2024-11-20 11:47:54.158288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.351 [2024-11-20 11:47:54.158306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.351 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.351 [2024-11-20 11:47:54.170253] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.351 [2024-11-20 11:47:54.170271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.182230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.182248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.194257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.194280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.206203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.206222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.218201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.218221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.230163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.230180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.235300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.352 [2024-11-20 11:47:54.242143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.242161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.254124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.254143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.266103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.266121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.278099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.278122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.290065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.290083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.302041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.302059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.314018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.314035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.326023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.326050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.337981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.338005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.349960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.349983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.361938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.361962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.373942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.373969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 [2024-11-20 11:47:54.385921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.352 [2024-11-20 11:47:54.385948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.352 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.352 Running I/O for 5 seconds... 
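Every failure in the stream above is the same negative-path check: the client keeps asking the target to attach bdev malloc0 as NSID 1 on nqn.2016-06.io.spdk:cnode1, and because NSID 1 is already attached the target rejects each attempt, which surfaces as JSON-RPC error -32602 (Invalid parameters). A minimal Python sketch of one such call follows; the method name, parameters, and error code are taken from the log, while the RPC socket path /var/tmp/spdk.sock and the helper name add_duplicate_ns are illustrative assumptions, not values read from this run.

import json
import socket

def add_duplicate_ns(sock_path="/var/tmp/spdk.sock"):
    # Same request the log shows failing: attach malloc0 as NSID 1 on cnode1.
    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        # Assumes the whole response arrives in a single recv(); enough for a sketch.
        resp = json.loads(s.recv(65536).decode())
    # With NSID 1 already in use, resp carries
    # {"error": {"code": -32602, "message": "Invalid parameters"}}.
    return resp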
00:20:21.613 [2024-11-20 11:47:54.397880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.397899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.413543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.413571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.428079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.428106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.438829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.438855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.453468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.453494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.464073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.464100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.478413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.478439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.491548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.491575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.505316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.505343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.518928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.518970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.532496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.532525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.546981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.547007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.562304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.562332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.576335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.576395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.589689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.589732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.603305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.603334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.617026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.617053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.613 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.613 [2024-11-20 11:47:54.630903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.613 [2024-11-20 11:47:54.630929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.614 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.614 [2024-11-20 11:47:54.644306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.614 [2024-11-20 11:47:54.644334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.614 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.658362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.658389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.672022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.672050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.685217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.685243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.699135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.699162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.712915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.712944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.726258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.726297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.739730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.739762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.753341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.753367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.766909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.766935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.873 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.873 [2024-11-20 11:47:54.780584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.873 [2024-11-20 11:47:54.780610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.793997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.794023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.807738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.807763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.821647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.821684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.835885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.835911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.847352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.847383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.861043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.861069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.874831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.874858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.888544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.888572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.874 [2024-11-20 11:47:54.902277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.874 [2024-11-20 11:47:54.902304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.874 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:54.916170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:54.916198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:54.930021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:54.930048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:54.943751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:54.943776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:54.957109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:54.957135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:54.970683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:54.970710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:54.983721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:54.983755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:54.997520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:54.997553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.011226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.011254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.025205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.025233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.038597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.038625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.052582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.052610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.063574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.063602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.077696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.077724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.091382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.091415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.102583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.134 [2024-11-20 11:47:55.102612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.134 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.134 [2024-11-20 11:47:55.117268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.135 [2024-11-20 11:47:55.117295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.135 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.135 [2024-11-20 11:47:55.130765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.135 [2024-11-20 11:47:55.130790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.135 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.135 [2024-11-20 11:47:55.144119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.135 [2024-11-20 11:47:55.144147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.135 2024/11/20 11:47:55 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.135 [2024-11-20 11:47:55.157731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.135 [2024-11-20 11:47:55.157757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.135 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.135 [2024-11-20 11:47:55.171119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.135 [2024-11-20 11:47:55.171146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.135 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.185303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.185329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.199475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.199502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.214891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.214917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.229097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.229122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.239541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.239568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 
11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.253356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.253381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.266286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.266312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.280510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.395 [2024-11-20 11:47:55.280541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.395 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.395 [2024-11-20 11:47:55.294380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.294408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.307932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.307959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.321772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.321798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.333208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.333236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
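Judging by the timestamps, the same rejected nvmf_subsystem_add_ns call is re-issued roughly every 12 ms throughout the 5-second I/O window, so the duplicate-NSID error path is being exercised continuously while I/O runs. Below is a hedged sketch of such a loop; it reuses the hypothetical add_duplicate_ns() helper from the earlier sketch, and the duration and pacing are illustrative assumptions rather than values taken from this log.

import time

def hammer_duplicate_add(seconds=5.0, pause=0.012):
    # Re-issue the duplicate add for the length of the I/O run and count
    # how many attempts were rejected with the expected -32602 error.
    deadline = time.monotonic() + seconds
    attempts = rejected = 0
    while time.monotonic() < deadline:
        attempts += 1
        resp = add_duplicate_ns()
        if resp.get("error", {}).get("code") == -32602:
            rejected += 1
        time.sleep(pause)
    return attempts, rejected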
00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.346684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.346713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.360163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.360189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.374158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.374186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.387839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.387865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.401285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.401310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.414810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.414835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.396 [2024-11-20 11:47:55.428357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.396 [2024-11-20 11:47:55.428383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:20:22.396 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.442306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.442334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.455793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.455819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.469673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.469708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.484051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.484078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.494934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.494960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.509487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.509515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.520455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.520487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.535898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.535927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.551714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.551741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.562951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.562979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.578492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.578520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.593434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.593463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.607016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.607044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.620866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.620896] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.635043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.635074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.646043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.646080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.660677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.660715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.656 [2024-11-20 11:47:55.673911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.656 [2024-11-20 11:47:55.673940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.656 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.657 [2024-11-20 11:47:55.687791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.657 [2024-11-20 11:47:55.687818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.657 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.916 [2024-11-20 11:47:55.701679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.916 [2024-11-20 11:47:55.701705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.916 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.916 [2024-11-20 11:47:55.715505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.916 [2024-11-20 
11:47:55.715534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.916 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.916 [2024-11-20 11:47:55.729224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.916 [2024-11-20 11:47:55.729250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.916 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.742586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.742612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.756385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.756413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.769867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.769893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.783748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.783778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.797362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.797392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.811177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:20:22.917 [2024-11-20 11:47:55.811204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.824859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.824885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.837814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.837840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.851607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.851636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.865204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.865233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.878803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.878829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.892904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.892930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.908725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:20:22.917 [2024-11-20 11:47:55.908752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.922759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.922784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.936607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.936635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.917 [2024-11-20 11:47:55.950054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.917 [2024-11-20 11:47:55.950079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.917 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:55.964162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:55.964190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:55.977881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:55.977909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:55.991582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:55.991608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.004991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.005020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.018760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.018798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.032089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.032119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.046218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.046246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.061865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.061892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.076520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.076551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.087377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.087412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.101910] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.177 [2024-11-20 11:47:56.101952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.177 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.177 [2024-11-20 11:47:56.115015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.115053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.178 [2024-11-20 11:47:56.128942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.128969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.178 [2024-11-20 11:47:56.142366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.142393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.178 [2024-11-20 11:47:56.155610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.155637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.178 [2024-11-20 11:47:56.169150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.169177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.178 [2024-11-20 11:47:56.182716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.182742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.178 [2024-11-20 
11:47:56.195937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.195963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.178 [2024-11-20 11:47:56.209821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.178 [2024-11-20 11:47:56.209848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.178 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.221173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.221200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.235001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.235028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.248780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.248814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.262815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.262849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.273525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.273554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:20:23.437 [2024-11-20 11:47:56.287784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.287811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.301241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.301268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.314906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.314935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.328308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.328337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.343135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.343162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.353929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.353956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.368037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.368067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.381771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.381798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.437 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.437 [2024-11-20 11:47:56.395775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.437 [2024-11-20 11:47:56.395808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.438 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.438 [2024-11-20 11:47:56.409564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.438 [2024-11-20 11:47:56.409591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.438 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.438 [2024-11-20 11:47:56.423312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.438 [2024-11-20 11:47:56.423348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.438 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.438 [2024-11-20 11:47:56.437290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.438 [2024-11-20 11:47:56.437317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.438 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.438 [2024-11-20 11:47:56.450999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.438 [2024-11-20 11:47:56.451027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.438 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.438 [2024-11-20 11:47:56.464823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.438 [2024-11-20 11:47:56.464850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.438 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.478233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.478260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.491920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.491949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.505377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.505404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.519475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.519503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.533273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.533301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.547150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.547176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.561229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.561255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.574840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.574865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.589325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.589354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.605484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.605515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.616458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.616488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.630810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.630921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.644806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.644917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.655575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.655608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.669933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.670029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.684292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.684377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.698597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.698696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.709987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.710071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.697 [2024-11-20 11:47:56.724652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.697 [2024-11-20 11:47:56.724748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.697 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.698 [2024-11-20 11:47:56.735634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.698 [2024-11-20 11:47:56.735718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.750008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.750037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.764028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.764127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.774777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.774804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.788508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.788606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.802298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.802378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.816368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.816468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.826859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.826939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.841082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.841157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.854814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.854895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.958 [2024-11-20 11:47:56.869004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.958 [2024-11-20 11:47:56.869085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.958 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.882868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.882938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.896533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.896632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.911173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.911243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.926687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.926775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.941574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.941604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.957326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.957419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.971768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.971855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.959 [2024-11-20 11:47:56.985554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.959 [2024-11-20 11:47:56.985644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.959 2024/11/20 11:47:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.000682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.000792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.015863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.015948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.030225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.030253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.043694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.043771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.057905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.057988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.071267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.071373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.085596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.085698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.101019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.101055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.115516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.115553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.126923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.127025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.141060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.141090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 
11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.154940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.155020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.165747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.165775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.180137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.180163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.193297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.193324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.207351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.207384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.221469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.221498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.220 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.220 [2024-11-20 11:47:57.231996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.220 [2024-11-20 11:47:57.232022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:20:24.221 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.221 [2024-11-20 11:47:57.245999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.221 [2024-11-20 11:47:57.246031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.221 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.221 [2024-11-20 11:47:57.259329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.221 [2024-11-20 11:47:57.259372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.481 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.481 [2024-11-20 11:47:57.273583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.481 [2024-11-20 11:47:57.273614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.482 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.482 [2024-11-20 11:47:57.284969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.482 [2024-11-20 11:47:57.284996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.482 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.482 [2024-11-20 11:47:57.299011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.482 [2024-11-20 11:47:57.299040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.482 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.482 [2024-11-20 11:47:57.312533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.482 [2024-11-20 11:47:57.312562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.482 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.482 [2024-11-20 11:47:57.326536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.482 [2024-11-20 11:47:57.326563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace
00:20:24.482 2024/11/20 11:47:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:20:24.482 [2024-11-20 11:47:57.337165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:24.482 [2024-11-20 11:47:57.337192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same three-line sequence (JSON-RPC error Code=-32602 Msg=Invalid parameters, "Requested NSID 1 already in use", "Unable to add namespace") repeats for every subsequent nvmf_subsystem_add_ns attempt, roughly one attempt every 11-15 ms, from 11:47:57.337 through 11:47:59.244 (build timestamps 00:20:24.482 through 00:20:26.349) ...]
00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:20:26.349 [2024-11-20 11:47:59.244112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:26.349 [2024-11-20 11:47:59.244205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable
to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.255415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.255511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.269942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.269970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.283772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.283859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.298171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.298200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.309011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.309038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.323332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.323432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.336841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.336882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.350746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.350775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.364174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.364262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.349 [2024-11-20 11:47:59.378580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.349 [2024-11-20 11:47:59.378611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.349 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 00:20:26.609 Latency(us) 00:20:26.609 [2024-11-20T11:47:59.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.609 [2024-11-20T11:47:59.652Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:26.609 Nvme1n1 : 5.01 16777.76 131.08 0.00 0.00 7621.78 3419.89 18773.63 00:20:26.609 [2024-11-20T11:47:59.652Z] =================================================================================================================== 00:20:26.609 [2024-11-20T11:47:59.652Z] Total : 16777.76 131.08 0.00 0.00 7621.78 3419.89 18773.63 00:20:26.609 [2024-11-20 11:47:59.390509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.390536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.402482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.402570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.414453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.414474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.426426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.426446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.438408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.438468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.450393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.450413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.462375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.462396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.474351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.474416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.486331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.486353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.498321] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.498344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.510292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.510352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.522266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.609 [2024-11-20 11:47:59.522317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.609 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.609 [2024-11-20 11:47:59.534245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.610 [2024-11-20 11:47:59.534265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.610 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.610 [2024-11-20 11:47:59.546223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.610 [2024-11-20 11:47:59.546241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.610 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.610 [2024-11-20 11:47:59.558217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.610 [2024-11-20 11:47:59.558270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.610 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.610 [2024-11-20 11:47:59.570187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.610 [2024-11-20 11:47:59.570207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.610 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.610 [2024-11-20 
11:47:59.582166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.610 [2024-11-20 11:47:59.582186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.610 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.610 [2024-11-20 11:47:59.594148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.610 [2024-11-20 11:47:59.594205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.610 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.610 [2024-11-20 11:47:59.606122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.610 [2024-11-20 11:47:59.606143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.610 2024/11/20 11:47:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.610 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75837) - No such process 00:20:26.610 11:47:59 -- target/zcopy.sh@49 -- # wait 75837 00:20:26.610 11:47:59 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:26.610 11:47:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.610 11:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.610 11:47:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.610 11:47:59 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:26.610 11:47:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.610 11:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.610 delay0 00:20:26.610 11:47:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.610 11:47:59 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:26.610 11:47:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.610 11:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.610 11:47:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.610 11:47:59 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:26.869 [2024-11-20 11:47:59.826970] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:33.441 Initializing NVMe Controllers 00:20:33.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:33.441 Initialization complete. Launching workers. 
00:20:33.441 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 749 00:20:33.441 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1035, failed to submit 34 00:20:33.441 success 842, unsuccess 193, failed 0 00:20:33.441 11:48:05 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:33.441 11:48:05 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:33.441 11:48:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:33.441 11:48:05 -- nvmf/common.sh@116 -- # sync 00:20:33.441 11:48:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:33.441 11:48:06 -- nvmf/common.sh@119 -- # set +e 00:20:33.441 11:48:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:33.441 11:48:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:33.441 rmmod nvme_tcp 00:20:33.441 rmmod nvme_fabrics 00:20:33.441 rmmod nvme_keyring 00:20:33.441 11:48:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:33.441 11:48:06 -- nvmf/common.sh@123 -- # set -e 00:20:33.441 11:48:06 -- nvmf/common.sh@124 -- # return 0 00:20:33.441 11:48:06 -- nvmf/common.sh@477 -- # '[' -n 75662 ']' 00:20:33.441 11:48:06 -- nvmf/common.sh@478 -- # killprocess 75662 00:20:33.441 11:48:06 -- common/autotest_common.sh@936 -- # '[' -z 75662 ']' 00:20:33.441 11:48:06 -- common/autotest_common.sh@940 -- # kill -0 75662 00:20:33.441 11:48:06 -- common/autotest_common.sh@941 -- # uname 00:20:33.441 11:48:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.441 11:48:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75662 00:20:33.441 killing process with pid 75662 00:20:33.441 11:48:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:33.441 11:48:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:33.441 11:48:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75662' 00:20:33.441 11:48:06 -- common/autotest_common.sh@955 -- # kill 75662 00:20:33.441 11:48:06 -- common/autotest_common.sh@960 -- # wait 75662 00:20:33.700 11:48:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:33.700 11:48:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:33.700 11:48:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:33.700 11:48:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.700 11:48:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:33.700 11:48:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.700 11:48:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.700 11:48:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.700 11:48:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:33.700 00:20:33.700 real 0m24.949s 00:20:33.700 user 0m38.998s 00:20:33.700 sys 0m7.314s 00:20:33.700 11:48:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:33.700 11:48:06 -- common/autotest_common.sh@10 -- # set +x 00:20:33.700 ************************************ 00:20:33.700 END TEST nvmf_zcopy 00:20:33.700 ************************************ 00:20:33.700 11:48:06 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:33.700 11:48:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:33.700 11:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.700 11:48:06 -- common/autotest_common.sh@10 -- # set +x 00:20:33.700 ************************************ 00:20:33.700 START TEST 
nvmf_nmic 00:20:33.700 ************************************ 00:20:33.700 11:48:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:33.700 * Looking for test storage... 00:20:33.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:33.700 11:48:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:33.700 11:48:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:33.700 11:48:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:33.960 11:48:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:33.960 11:48:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:33.960 11:48:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:33.960 11:48:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:33.960 11:48:06 -- scripts/common.sh@335 -- # IFS=.-: 00:20:33.960 11:48:06 -- scripts/common.sh@335 -- # read -ra ver1 00:20:33.960 11:48:06 -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.960 11:48:06 -- scripts/common.sh@336 -- # read -ra ver2 00:20:33.960 11:48:06 -- scripts/common.sh@337 -- # local 'op=<' 00:20:33.960 11:48:06 -- scripts/common.sh@339 -- # ver1_l=2 00:20:33.960 11:48:06 -- scripts/common.sh@340 -- # ver2_l=1 00:20:33.960 11:48:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:33.960 11:48:06 -- scripts/common.sh@343 -- # case "$op" in 00:20:33.960 11:48:06 -- scripts/common.sh@344 -- # : 1 00:20:33.960 11:48:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:33.960 11:48:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:33.960 11:48:06 -- scripts/common.sh@364 -- # decimal 1 00:20:33.960 11:48:06 -- scripts/common.sh@352 -- # local d=1 00:20:33.960 11:48:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.960 11:48:06 -- scripts/common.sh@354 -- # echo 1 00:20:33.960 11:48:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:33.960 11:48:06 -- scripts/common.sh@365 -- # decimal 2 00:20:33.960 11:48:06 -- scripts/common.sh@352 -- # local d=2 00:20:33.960 11:48:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.960 11:48:06 -- scripts/common.sh@354 -- # echo 2 00:20:33.960 11:48:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:33.960 11:48:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:33.960 11:48:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:33.960 11:48:06 -- scripts/common.sh@367 -- # return 0 00:20:33.960 11:48:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.960 11:48:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:33.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.960 --rc genhtml_branch_coverage=1 00:20:33.960 --rc genhtml_function_coverage=1 00:20:33.960 --rc genhtml_legend=1 00:20:33.960 --rc geninfo_all_blocks=1 00:20:33.960 --rc geninfo_unexecuted_blocks=1 00:20:33.960 00:20:33.960 ' 00:20:33.960 11:48:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:33.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.960 --rc genhtml_branch_coverage=1 00:20:33.960 --rc genhtml_function_coverage=1 00:20:33.960 --rc genhtml_legend=1 00:20:33.960 --rc geninfo_all_blocks=1 00:20:33.960 --rc geninfo_unexecuted_blocks=1 00:20:33.960 00:20:33.960 ' 00:20:33.960 11:48:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:33.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.960 --rc 
genhtml_branch_coverage=1 00:20:33.960 --rc genhtml_function_coverage=1 00:20:33.960 --rc genhtml_legend=1 00:20:33.960 --rc geninfo_all_blocks=1 00:20:33.960 --rc geninfo_unexecuted_blocks=1 00:20:33.960 00:20:33.960 ' 00:20:33.960 11:48:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:33.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.960 --rc genhtml_branch_coverage=1 00:20:33.960 --rc genhtml_function_coverage=1 00:20:33.960 --rc genhtml_legend=1 00:20:33.960 --rc geninfo_all_blocks=1 00:20:33.960 --rc geninfo_unexecuted_blocks=1 00:20:33.960 00:20:33.960 ' 00:20:33.960 11:48:06 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.960 11:48:06 -- nvmf/common.sh@7 -- # uname -s 00:20:33.960 11:48:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.960 11:48:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.960 11:48:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.960 11:48:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.960 11:48:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.960 11:48:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.960 11:48:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.960 11:48:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.960 11:48:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.960 11:48:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.960 11:48:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:20:33.960 11:48:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:20:33.960 11:48:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.960 11:48:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.960 11:48:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.960 11:48:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.960 11:48:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.960 11:48:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.960 11:48:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.960 11:48:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.960 11:48:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.960 11:48:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.960 11:48:06 -- paths/export.sh@5 -- # export PATH 00:20:33.961 11:48:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.961 11:48:06 -- nvmf/common.sh@46 -- # : 0 00:20:33.961 11:48:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:33.961 11:48:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:33.961 11:48:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:33.961 11:48:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.961 11:48:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.961 11:48:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:33.961 11:48:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:33.961 11:48:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:33.961 11:48:06 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:33.961 11:48:06 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:33.961 11:48:06 -- target/nmic.sh@14 -- # nvmftestinit 00:20:33.961 11:48:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:33.961 11:48:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.961 11:48:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:33.961 11:48:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:33.961 11:48:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:33.961 11:48:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.961 11:48:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.961 11:48:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.961 11:48:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:33.961 11:48:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:33.961 11:48:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:33.961 11:48:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:33.961 11:48:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:33.961 11:48:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:33.961 11:48:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.961 11:48:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.961 11:48:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:33.961 11:48:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:33.961 11:48:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:33.961 11:48:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:33.961 11:48:06 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:33.961 11:48:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.961 11:48:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:33.961 11:48:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:33.961 11:48:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:33.961 11:48:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:33.961 11:48:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:33.961 11:48:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:33.961 Cannot find device "nvmf_tgt_br" 00:20:33.961 11:48:06 -- nvmf/common.sh@154 -- # true 00:20:33.961 11:48:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.961 Cannot find device "nvmf_tgt_br2" 00:20:33.961 11:48:06 -- nvmf/common.sh@155 -- # true 00:20:33.961 11:48:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:33.961 11:48:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:33.961 Cannot find device "nvmf_tgt_br" 00:20:33.961 11:48:06 -- nvmf/common.sh@157 -- # true 00:20:33.961 11:48:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:33.961 Cannot find device "nvmf_tgt_br2" 00:20:33.961 11:48:06 -- nvmf/common.sh@158 -- # true 00:20:33.961 11:48:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:34.221 11:48:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:34.221 11:48:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.221 11:48:07 -- nvmf/common.sh@161 -- # true 00:20:34.221 11:48:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.221 11:48:07 -- nvmf/common.sh@162 -- # true 00:20:34.221 11:48:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.221 11:48:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.221 11:48:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.221 11:48:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.221 11:48:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.221 11:48:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.221 11:48:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.221 11:48:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:34.221 11:48:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:34.221 11:48:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:34.221 11:48:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:34.221 11:48:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:34.221 11:48:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:34.221 11:48:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.221 11:48:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.221 11:48:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:34.221 11:48:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:34.221 11:48:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:34.221 11:48:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.221 11:48:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.221 11:48:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.221 11:48:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.221 11:48:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.221 11:48:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:34.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:20:34.221 00:20:34.221 --- 10.0.0.2 ping statistics --- 00:20:34.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.221 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:34.221 11:48:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:34.221 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.221 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:20:34.221 00:20:34.221 --- 10.0.0.3 ping statistics --- 00:20:34.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.221 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:34.221 11:48:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:34.221 00:20:34.221 --- 10.0.0.1 ping statistics --- 00:20:34.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.221 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:34.221 11:48:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.221 11:48:07 -- nvmf/common.sh@421 -- # return 0 00:20:34.221 11:48:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:34.221 11:48:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.221 11:48:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:34.221 11:48:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:34.221 11:48:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.221 11:48:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:34.221 11:48:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:34.221 11:48:07 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:34.221 11:48:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:34.221 11:48:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:34.221 11:48:07 -- common/autotest_common.sh@10 -- # set +x 00:20:34.221 11:48:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:34.221 11:48:07 -- nvmf/common.sh@469 -- # nvmfpid=76173 00:20:34.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:34.221 11:48:07 -- nvmf/common.sh@470 -- # waitforlisten 76173 00:20:34.221 11:48:07 -- common/autotest_common.sh@829 -- # '[' -z 76173 ']' 00:20:34.221 11:48:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.221 11:48:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.221 11:48:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.221 11:48:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.221 11:48:07 -- common/autotest_common.sh@10 -- # set +x 00:20:34.481 [2024-11-20 11:48:07.291496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:34.481 [2024-11-20 11:48:07.291561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.481 [2024-11-20 11:48:07.428386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.481 [2024-11-20 11:48:07.511452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:34.481 [2024-11-20 11:48:07.511594] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.481 [2024-11-20 11:48:07.511602] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.481 [2024-11-20 11:48:07.511607] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.481 [2024-11-20 11:48:07.511823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.481 [2024-11-20 11:48:07.512874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.481 [2024-11-20 11:48:07.512981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.481 [2024-11-20 11:48:07.512985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.420 11:48:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.420 11:48:08 -- common/autotest_common.sh@862 -- # return 0 00:20:35.420 11:48:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:35.420 11:48:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 11:48:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.420 11:48:08 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 [2024-11-20 11:48:08.189460] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 Malloc0 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 [2024-11-20 11:48:08.248543] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.420 test case1: single bdev can't be used in multiple subsystems 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:35.420 11:48:08 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@28 -- # nmic_status=0 00:20:35.420 11:48:08 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 [2024-11-20 11:48:08.284409] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:35.420 [2024-11-20 11:48:08.284434] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:35.420 [2024-11-20 11:48:08.284441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:35.420 2024/11/20 11:48:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:35.420 request: 00:20:35.420 { 00:20:35.420 "method": "nvmf_subsystem_add_ns", 00:20:35.420 "params": { 00:20:35.420 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.420 "namespace": { 00:20:35.420 "bdev_name": "Malloc0" 00:20:35.420 } 00:20:35.420 } 00:20:35.420 } 00:20:35.420 Got JSON-RPC error response 00:20:35.420 GoRPCClient: error on JSON-RPC call 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@29 -- # nmic_status=1 00:20:35.420 11:48:08 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:35.420 11:48:08 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:35.420 Adding namespace failed - expected result. 
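(Editor's note, not part of the captured log.) Test case1 above exercises the expected failure path: Malloc0 is already claimed by nqn.2016-06.io.spdk:cnode1, so attaching it to nqn.2016-06.io.spdk:cnode2 is rejected with Code=-32602. A minimal sketch of reproducing the same check by hand with SPDK's rpc.py is below. The rpc.py path and use of the default RPC socket are assumptions; the RPC names, flags, NQNs, and bdev name are taken directly from the rpc_cmd invocations recorded in this log.

#!/usr/bin/env bash
# Sketch only: reproduce "single bdev can't be used in multiple subsystems".
# Assumes a running nvmf_tgt listening on the default RPC socket and an SPDK
# checkout at this (hypothetical) path; adjust RPC to your environment.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Second claim of the same bdev is expected to fail with Code=-32602,
# matching the JSON-RPC error shown above.
if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected success"
else
    echo "Adding namespace failed - expected result."
fi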
00:20:35.420 11:48:08 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:35.420 test case2: host connect to nvmf target in multiple paths 00:20:35.420 11:48:08 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:35.420 11:48:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.420 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 [2024-11-20 11:48:08.304512] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:35.420 11:48:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.420 11:48:08 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:35.680 11:48:08 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:35.680 11:48:08 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:35.680 11:48:08 -- common/autotest_common.sh@1187 -- # local i=0 00:20:35.680 11:48:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:35.680 11:48:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:35.680 11:48:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:38.221 11:48:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:38.221 11:48:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:38.221 11:48:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:38.221 11:48:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:38.221 11:48:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.221 11:48:10 -- common/autotest_common.sh@1197 -- # return 0 00:20:38.221 11:48:10 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:38.221 [global] 00:20:38.221 thread=1 00:20:38.221 invalidate=1 00:20:38.221 rw=write 00:20:38.221 time_based=1 00:20:38.221 runtime=1 00:20:38.221 ioengine=libaio 00:20:38.221 direct=1 00:20:38.221 bs=4096 00:20:38.221 iodepth=1 00:20:38.221 norandommap=0 00:20:38.221 numjobs=1 00:20:38.221 00:20:38.221 verify_dump=1 00:20:38.221 verify_backlog=512 00:20:38.221 verify_state_save=0 00:20:38.221 do_verify=1 00:20:38.221 verify=crc32c-intel 00:20:38.221 [job0] 00:20:38.221 filename=/dev/nvme0n1 00:20:38.221 Could not set queue depth (nvme0n1) 00:20:38.221 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:38.221 fio-3.35 00:20:38.221 Starting 1 thread 00:20:39.160 00:20:39.160 job0: (groupid=0, jobs=1): err= 0: pid=76278: Wed Nov 20 11:48:11 2024 00:20:39.160 read: IOPS=4334, BW=16.9MiB/s (17.8MB/s)(16.9MiB/1001msec) 00:20:39.160 slat (nsec): min=6486, max=26635, avg=7342.37, stdev=1220.11 00:20:39.160 clat (usec): min=90, max=465, avg=118.66, stdev=11.73 00:20:39.160 lat (usec): min=97, max=471, avg=126.00, stdev=11.92 00:20:39.160 clat percentiles (usec): 00:20:39.160 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 111], 00:20:39.160 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 121], 00:20:39.160 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 133], 
95.00th=[ 139], 00:20:39.160 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 172], 99.95th=[ 176], 00:20:39.160 | 99.99th=[ 465] 00:20:39.160 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:20:39.160 slat (usec): min=9, max=121, avg=12.12, stdev= 6.22 00:20:39.160 clat (usec): min=61, max=210, avg=84.55, stdev=10.22 00:20:39.160 lat (usec): min=71, max=331, avg=96.67, stdev=13.72 00:20:39.160 clat percentiles (usec): 00:20:39.160 | 1.00th=[ 68], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 77], 00:20:39.160 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 86], 00:20:39.160 | 70.00th=[ 89], 80.00th=[ 92], 90.00th=[ 98], 95.00th=[ 103], 00:20:39.160 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 137], 99.95th=[ 139], 00:20:39.160 | 99.99th=[ 210] 00:20:39.160 bw ( KiB/s): min=19376, max=19376, per=100.00%, avg=19376.00, stdev= 0.00, samples=1 00:20:39.160 iops : min= 4844, max= 4844, avg=4844.00, stdev= 0.00, samples=1 00:20:39.160 lat (usec) : 100=48.72%, 250=51.27%, 500=0.01% 00:20:39.160 cpu : usr=1.00%, sys=7.10%, ctx=8948, majf=0, minf=5 00:20:39.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:39.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.160 issued rwts: total=4339,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:39.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:39.160 00:20:39.160 Run status group 0 (all jobs): 00:20:39.160 READ: bw=16.9MiB/s (17.8MB/s), 16.9MiB/s-16.9MiB/s (17.8MB/s-17.8MB/s), io=16.9MiB (17.8MB), run=1001-1001msec 00:20:39.160 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:20:39.160 00:20:39.160 Disk stats (read/write): 00:20:39.160 nvme0n1: ios=3970/4096, merge=0/0, ticks=498/366, in_queue=864, util=91.18% 00:20:39.160 11:48:12 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:39.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:39.160 11:48:12 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:39.160 11:48:12 -- common/autotest_common.sh@1208 -- # local i=0 00:20:39.160 11:48:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:39.160 11:48:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.160 11:48:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:39.160 11:48:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.160 11:48:12 -- common/autotest_common.sh@1220 -- # return 0 00:20:39.160 11:48:12 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:39.160 11:48:12 -- target/nmic.sh@53 -- # nvmftestfini 00:20:39.160 11:48:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:39.160 11:48:12 -- nvmf/common.sh@116 -- # sync 00:20:39.160 11:48:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:39.160 11:48:12 -- nvmf/common.sh@119 -- # set +e 00:20:39.160 11:48:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:39.160 11:48:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:39.160 rmmod nvme_tcp 00:20:39.419 rmmod nvme_fabrics 00:20:39.419 rmmod nvme_keyring 00:20:39.419 11:48:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:39.419 11:48:12 -- nvmf/common.sh@123 -- # set -e 00:20:39.419 11:48:12 -- nvmf/common.sh@124 -- # return 0 00:20:39.419 11:48:12 -- nvmf/common.sh@477 -- # '[' -n 76173 ']' 
00:20:39.419 11:48:12 -- nvmf/common.sh@478 -- # killprocess 76173 00:20:39.419 11:48:12 -- common/autotest_common.sh@936 -- # '[' -z 76173 ']' 00:20:39.419 11:48:12 -- common/autotest_common.sh@940 -- # kill -0 76173 00:20:39.419 11:48:12 -- common/autotest_common.sh@941 -- # uname 00:20:39.419 11:48:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.419 11:48:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76173 00:20:39.419 killing process with pid 76173 00:20:39.419 11:48:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:39.419 11:48:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:39.419 11:48:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76173' 00:20:39.419 11:48:12 -- common/autotest_common.sh@955 -- # kill 76173 00:20:39.419 11:48:12 -- common/autotest_common.sh@960 -- # wait 76173 00:20:39.679 11:48:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:39.679 11:48:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:39.679 11:48:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:39.679 11:48:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.679 11:48:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:39.679 11:48:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.679 11:48:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.679 11:48:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.679 11:48:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:39.679 00:20:39.679 real 0m5.970s 00:20:39.679 user 0m19.979s 00:20:39.679 sys 0m1.181s 00:20:39.679 11:48:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:39.679 11:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:39.679 ************************************ 00:20:39.679 END TEST nvmf_nmic 00:20:39.679 ************************************ 00:20:39.679 11:48:12 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:39.679 11:48:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:39.679 11:48:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:39.679 11:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:39.679 ************************************ 00:20:39.679 START TEST nvmf_fio_target 00:20:39.679 ************************************ 00:20:39.679 11:48:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:39.939 * Looking for test storage... 
00:20:39.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:39.939 11:48:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:39.939 11:48:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:39.940 11:48:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:39.940 11:48:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:39.940 11:48:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:39.940 11:48:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:39.940 11:48:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:39.940 11:48:12 -- scripts/common.sh@335 -- # IFS=.-: 00:20:39.940 11:48:12 -- scripts/common.sh@335 -- # read -ra ver1 00:20:39.940 11:48:12 -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.940 11:48:12 -- scripts/common.sh@336 -- # read -ra ver2 00:20:39.940 11:48:12 -- scripts/common.sh@337 -- # local 'op=<' 00:20:39.940 11:48:12 -- scripts/common.sh@339 -- # ver1_l=2 00:20:39.940 11:48:12 -- scripts/common.sh@340 -- # ver2_l=1 00:20:39.940 11:48:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:39.940 11:48:12 -- scripts/common.sh@343 -- # case "$op" in 00:20:39.940 11:48:12 -- scripts/common.sh@344 -- # : 1 00:20:39.940 11:48:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:39.940 11:48:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.940 11:48:12 -- scripts/common.sh@364 -- # decimal 1 00:20:39.940 11:48:12 -- scripts/common.sh@352 -- # local d=1 00:20:39.940 11:48:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.940 11:48:12 -- scripts/common.sh@354 -- # echo 1 00:20:39.940 11:48:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:39.940 11:48:12 -- scripts/common.sh@365 -- # decimal 2 00:20:39.940 11:48:12 -- scripts/common.sh@352 -- # local d=2 00:20:39.940 11:48:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.940 11:48:12 -- scripts/common.sh@354 -- # echo 2 00:20:39.940 11:48:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:39.940 11:48:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:39.940 11:48:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:39.940 11:48:12 -- scripts/common.sh@367 -- # return 0 00:20:39.940 11:48:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.940 11:48:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.940 --rc genhtml_branch_coverage=1 00:20:39.940 --rc genhtml_function_coverage=1 00:20:39.940 --rc genhtml_legend=1 00:20:39.940 --rc geninfo_all_blocks=1 00:20:39.940 --rc geninfo_unexecuted_blocks=1 00:20:39.940 00:20:39.940 ' 00:20:39.940 11:48:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.940 --rc genhtml_branch_coverage=1 00:20:39.940 --rc genhtml_function_coverage=1 00:20:39.940 --rc genhtml_legend=1 00:20:39.940 --rc geninfo_all_blocks=1 00:20:39.940 --rc geninfo_unexecuted_blocks=1 00:20:39.940 00:20:39.940 ' 00:20:39.940 11:48:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.940 --rc genhtml_branch_coverage=1 00:20:39.940 --rc genhtml_function_coverage=1 00:20:39.940 --rc genhtml_legend=1 00:20:39.940 --rc geninfo_all_blocks=1 00:20:39.940 --rc geninfo_unexecuted_blocks=1 00:20:39.940 00:20:39.940 ' 00:20:39.940 
11:48:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.940 --rc genhtml_branch_coverage=1 00:20:39.940 --rc genhtml_function_coverage=1 00:20:39.940 --rc genhtml_legend=1 00:20:39.940 --rc geninfo_all_blocks=1 00:20:39.940 --rc geninfo_unexecuted_blocks=1 00:20:39.940 00:20:39.940 ' 00:20:39.940 11:48:12 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.940 11:48:12 -- nvmf/common.sh@7 -- # uname -s 00:20:39.940 11:48:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.940 11:48:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.940 11:48:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.940 11:48:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.940 11:48:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.940 11:48:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.940 11:48:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.940 11:48:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.940 11:48:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.940 11:48:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.940 11:48:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:20:39.940 11:48:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:20:39.940 11:48:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.940 11:48:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.940 11:48:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.940 11:48:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.940 11:48:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.940 11:48:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.940 11:48:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.940 11:48:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.940 11:48:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.940 11:48:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.940 11:48:12 -- paths/export.sh@5 -- # export PATH 00:20:39.940 11:48:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.940 11:48:12 -- nvmf/common.sh@46 -- # : 0 00:20:39.940 11:48:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:39.940 11:48:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:39.940 11:48:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:39.940 11:48:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.940 11:48:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.940 11:48:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:39.940 11:48:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:39.940 11:48:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:39.940 11:48:12 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:39.940 11:48:12 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.940 11:48:12 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.940 11:48:12 -- target/fio.sh@16 -- # nvmftestinit 00:20:39.940 11:48:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:39.940 11:48:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.940 11:48:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:39.940 11:48:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:39.940 11:48:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:39.940 11:48:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.940 11:48:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.940 11:48:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.940 11:48:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:39.940 11:48:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:39.940 11:48:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:39.940 11:48:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:39.940 11:48:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:39.940 11:48:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:39.940 11:48:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.940 11:48:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.940 11:48:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:39.940 11:48:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:39.940 11:48:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.940 11:48:12 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.940 11:48:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.940 11:48:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.940 11:48:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.940 11:48:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.941 11:48:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.941 11:48:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.941 11:48:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:39.941 11:48:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:39.941 Cannot find device "nvmf_tgt_br" 00:20:39.941 11:48:12 -- nvmf/common.sh@154 -- # true 00:20:39.941 11:48:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.941 Cannot find device "nvmf_tgt_br2" 00:20:39.941 11:48:12 -- nvmf/common.sh@155 -- # true 00:20:39.941 11:48:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:39.941 11:48:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:39.941 Cannot find device "nvmf_tgt_br" 00:20:39.941 11:48:12 -- nvmf/common.sh@157 -- # true 00:20:39.941 11:48:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:40.200 Cannot find device "nvmf_tgt_br2" 00:20:40.200 11:48:12 -- nvmf/common.sh@158 -- # true 00:20:40.200 11:48:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:40.200 11:48:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:40.200 11:48:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.200 11:48:13 -- nvmf/common.sh@161 -- # true 00:20:40.200 11:48:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.200 11:48:13 -- nvmf/common.sh@162 -- # true 00:20:40.200 11:48:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.200 11:48:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.200 11:48:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.200 11:48:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.200 11:48:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.200 11:48:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.200 11:48:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.200 11:48:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.200 11:48:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.200 11:48:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:40.200 11:48:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:40.200 11:48:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:40.200 11:48:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:40.200 11:48:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.200 11:48:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:40.200 11:48:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.200 11:48:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:40.200 11:48:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:40.200 11:48:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.200 11:48:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.459 11:48:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.459 11:48:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.459 11:48:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.459 11:48:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:40.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:40.459 00:20:40.459 --- 10.0.0.2 ping statistics --- 00:20:40.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.459 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:40.459 11:48:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:40.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:20:40.459 00:20:40.459 --- 10.0.0.3 ping statistics --- 00:20:40.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.459 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:40.459 11:48:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:20:40.459 00:20:40.459 --- 10.0.0.1 ping statistics --- 00:20:40.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.459 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:40.459 11:48:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.459 11:48:13 -- nvmf/common.sh@421 -- # return 0 00:20:40.459 11:48:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.459 11:48:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.459 11:48:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:40.459 11:48:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:40.459 11:48:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.459 11:48:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:40.459 11:48:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:40.459 11:48:13 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:40.459 11:48:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.459 11:48:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.459 11:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:40.459 11:48:13 -- nvmf/common.sh@469 -- # nvmfpid=76463 00:20:40.459 11:48:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.459 11:48:13 -- nvmf/common.sh@470 -- # waitforlisten 76463 00:20:40.460 11:48:13 -- common/autotest_common.sh@829 -- # '[' -z 76463 ']' 00:20:40.460 11:48:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.460 11:48:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:40.460 11:48:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.460 11:48:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.460 11:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:40.460 [2024-11-20 11:48:13.398696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:40.460 [2024-11-20 11:48:13.398747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.719 [2024-11-20 11:48:13.535363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.719 [2024-11-20 11:48:13.618834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.719 [2024-11-20 11:48:13.618959] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.719 [2024-11-20 11:48:13.618965] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.719 [2024-11-20 11:48:13.618971] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.719 [2024-11-20 11:48:13.619085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.719 [2024-11-20 11:48:13.619291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.719 [2024-11-20 11:48:13.619422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.719 [2024-11-20 11:48:13.619423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.289 11:48:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.289 11:48:14 -- common/autotest_common.sh@862 -- # return 0 00:20:41.289 11:48:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:41.289 11:48:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.289 11:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:41.289 11:48:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.289 11:48:14 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:41.549 [2024-11-20 11:48:14.456303] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.549 11:48:14 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:41.810 11:48:14 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:41.810 11:48:14 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.069 11:48:14 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:42.069 11:48:14 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.329 11:48:15 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:42.329 11:48:15 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.329 11:48:15 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:42.329 11:48:15 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:42.588 11:48:15 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.847 11:48:15 -- 
target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:42.847 11:48:15 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.107 11:48:15 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:43.107 11:48:15 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.365 11:48:16 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:43.365 11:48:16 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:43.365 11:48:16 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:43.625 11:48:16 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:43.625 11:48:16 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.883 11:48:16 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:43.883 11:48:16 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:43.883 11:48:16 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.143 [2024-11-20 11:48:17.101354] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.143 11:48:17 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:44.402 11:48:17 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:44.662 11:48:17 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:44.662 11:48:17 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:44.662 11:48:17 -- common/autotest_common.sh@1187 -- # local i=0 00:20:44.662 11:48:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:44.662 11:48:17 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:20:44.662 11:48:17 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:20:44.662 11:48:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:47.202 11:48:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:47.202 11:48:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:47.202 11:48:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:47.202 11:48:19 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:20:47.202 11:48:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:47.202 11:48:19 -- common/autotest_common.sh@1197 -- # return 0 00:20:47.202 11:48:19 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:47.202 [global] 00:20:47.202 thread=1 00:20:47.202 invalidate=1 00:20:47.202 rw=write 00:20:47.202 time_based=1 00:20:47.202 runtime=1 00:20:47.202 ioengine=libaio 00:20:47.202 direct=1 00:20:47.202 bs=4096 00:20:47.202 iodepth=1 00:20:47.202 norandommap=0 00:20:47.202 numjobs=1 00:20:47.202 00:20:47.202 verify_dump=1 00:20:47.202 verify_backlog=512 
00:20:47.202 verify_state_save=0 00:20:47.202 do_verify=1 00:20:47.202 verify=crc32c-intel 00:20:47.202 [job0] 00:20:47.202 filename=/dev/nvme0n1 00:20:47.202 [job1] 00:20:47.202 filename=/dev/nvme0n2 00:20:47.202 [job2] 00:20:47.202 filename=/dev/nvme0n3 00:20:47.202 [job3] 00:20:47.202 filename=/dev/nvme0n4 00:20:47.202 Could not set queue depth (nvme0n1) 00:20:47.202 Could not set queue depth (nvme0n2) 00:20:47.202 Could not set queue depth (nvme0n3) 00:20:47.202 Could not set queue depth (nvme0n4) 00:20:47.202 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.202 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.202 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.202 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.202 fio-3.35 00:20:47.202 Starting 4 threads 00:20:48.199 00:20:48.199 job0: (groupid=0, jobs=1): err= 0: pid=76753: Wed Nov 20 11:48:21 2024 00:20:48.199 read: IOPS=1651, BW=6605KiB/s (6764kB/s)(6612KiB/1001msec) 00:20:48.199 slat (nsec): min=8357, max=68726, avg=20268.70, stdev=9115.51 00:20:48.199 clat (usec): min=132, max=6273, avg=267.73, stdev=154.22 00:20:48.199 lat (usec): min=142, max=6301, avg=288.00, stdev=155.38 00:20:48.199 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 167], 5.00th=[ 200], 10.00th=[ 217], 20.00th=[ 231], 00:20:48.200 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 273], 00:20:48.200 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 343], 00:20:48.200 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 437], 99.95th=[ 6259], 00:20:48.200 | 99.99th=[ 6259] 00:20:48.200 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:48.200 slat (usec): min=12, max=152, avg=30.37, stdev=11.69 00:20:48.200 clat (usec): min=90, max=396, avg=221.26, stdev=37.12 00:20:48.200 lat (usec): min=112, max=424, avg=251.63, stdev=40.54 00:20:48.200 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 117], 5.00th=[ 159], 10.00th=[ 180], 20.00th=[ 196], 00:20:48.200 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:20:48.200 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 285], 00:20:48.200 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 347], 99.95th=[ 351], 00:20:48.200 | 99.99th=[ 396] 00:20:48.200 bw ( KiB/s): min= 8192, max= 8192, per=24.45%, avg=8192.00, stdev= 0.00, samples=1 00:20:48.200 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:48.200 lat (usec) : 100=0.16%, 250=64.23%, 500=35.58% 00:20:48.200 lat (msec) : 10=0.03% 00:20:48.200 cpu : usr=1.30%, sys=7.80%, ctx=3701, majf=0, minf=11 00:20:48.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 issued rwts: total=1653,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:48.200 job1: (groupid=0, jobs=1): err= 0: pid=76754: Wed Nov 20 11:48:21 2024 00:20:48.200 read: IOPS=1660, BW=6641KiB/s (6801kB/s)(6648KiB/1001msec) 00:20:48.200 slat (nsec): min=10818, max=76556, avg=21086.83, stdev=8422.39 00:20:48.200 clat (usec): min=133, max=410, avg=263.93, stdev=45.91 00:20:48.200 lat (usec): min=149, max=440, 
avg=285.02, stdev=48.39 00:20:48.200 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 155], 5.00th=[ 196], 10.00th=[ 212], 20.00th=[ 229], 00:20:48.200 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 273], 00:20:48.200 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 343], 00:20:48.200 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 408], 99.95th=[ 412], 00:20:48.200 | 99.99th=[ 412] 00:20:48.200 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:48.200 slat (usec): min=10, max=135, avg=30.29, stdev=10.75 00:20:48.200 clat (usec): min=87, max=3422, avg=222.36, stdev=81.45 00:20:48.200 lat (usec): min=102, max=3453, avg=252.65, stdev=82.63 00:20:48.200 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 116], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 196], 00:20:48.200 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:20:48.200 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 285], 00:20:48.200 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 363], 99.95th=[ 865], 00:20:48.200 | 99.99th=[ 3425] 00:20:48.200 bw ( KiB/s): min= 8192, max= 8192, per=24.45%, avg=8192.00, stdev= 0.00, samples=1 00:20:48.200 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:48.200 lat (usec) : 100=0.19%, 250=64.53%, 500=35.23%, 1000=0.03% 00:20:48.200 lat (msec) : 4=0.03% 00:20:48.200 cpu : usr=1.50%, sys=7.70%, ctx=3712, majf=0, minf=12 00:20:48.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 issued rwts: total=1662,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:48.200 job2: (groupid=0, jobs=1): err= 0: pid=76755: Wed Nov 20 11:48:21 2024 00:20:48.200 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:20:48.200 slat (nsec): min=8886, max=41798, avg=12795.83, stdev=3342.44 00:20:48.200 clat (usec): min=129, max=652, avg=240.49, stdev=34.10 00:20:48.200 lat (usec): min=139, max=668, avg=253.28, stdev=34.59 00:20:48.200 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 151], 5.00th=[ 186], 10.00th=[ 206], 20.00th=[ 221], 00:20:48.200 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:20:48.200 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 297], 00:20:48.200 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 359], 99.95th=[ 392], 00:20:48.200 | 99.99th=[ 652] 00:20:48.200 write: IOPS=2141, BW=8567KiB/s (8773kB/s)(8576KiB/1001msec); 0 zone resets 00:20:48.200 slat (usec): min=12, max=116, avg=22.84, stdev= 9.04 00:20:48.200 clat (usec): min=99, max=1399, avg=199.16, stdev=49.38 00:20:48.200 lat (usec): min=112, max=1415, avg=221.99, stdev=50.64 00:20:48.200 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 113], 5.00th=[ 130], 10.00th=[ 143], 20.00th=[ 165], 00:20:48.200 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 210], 00:20:48.200 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 255], 00:20:48.200 | 99.00th=[ 293], 99.50th=[ 330], 99.90th=[ 627], 99.95th=[ 775], 00:20:48.200 | 99.99th=[ 1401] 00:20:48.200 bw ( KiB/s): min= 8200, max= 8200, per=24.47%, avg=8200.00, stdev= 0.00, samples=1 00:20:48.200 iops : min= 2050, max= 2050, avg=2050.00, stdev= 0.00, samples=1 00:20:48.200 lat (usec) : 100=0.02%, 250=80.96%, 500=18.89%, 750=0.07%, 1000=0.02% 00:20:48.200 lat (msec) : 2=0.02% 
00:20:48.200 cpu : usr=0.80%, sys=5.50%, ctx=4196, majf=0, minf=11 00:20:48.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 issued rwts: total=2048,2144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:48.200 job3: (groupid=0, jobs=1): err= 0: pid=76756: Wed Nov 20 11:48:21 2024 00:20:48.200 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:20:48.200 slat (nsec): min=9796, max=39328, avg=14591.02, stdev=3685.87 00:20:48.200 clat (usec): min=134, max=560, avg=239.09, stdev=33.54 00:20:48.200 lat (usec): min=146, max=572, avg=253.68, stdev=33.82 00:20:48.200 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 157], 5.00th=[ 184], 10.00th=[ 204], 20.00th=[ 219], 00:20:48.200 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:20:48.200 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 297], 00:20:48.200 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 363], 99.95th=[ 408], 00:20:48.200 | 99.99th=[ 562] 00:20:48.200 write: IOPS=2142, BW=8571KiB/s (8777kB/s)(8580KiB/1001msec); 0 zone resets 00:20:48.200 slat (usec): min=13, max=133, avg=23.57, stdev=10.87 00:20:48.200 clat (usec): min=89, max=358, avg=197.68, stdev=36.74 00:20:48.200 lat (usec): min=105, max=385, avg=221.24, stdev=39.18 00:20:48.200 clat percentiles (usec): 00:20:48.200 | 1.00th=[ 119], 5.00th=[ 135], 10.00th=[ 147], 20.00th=[ 165], 00:20:48.200 | 30.00th=[ 182], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 208], 00:20:48.200 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 253], 00:20:48.200 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 338], 99.95th=[ 347], 00:20:48.200 | 99.99th=[ 359] 00:20:48.200 bw ( KiB/s): min= 8208, max= 8208, per=24.50%, avg=8208.00, stdev= 0.00, samples=1 00:20:48.200 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:20:48.200 lat (usec) : 100=0.05%, 250=82.18%, 500=17.74%, 750=0.02% 00:20:48.200 cpu : usr=1.30%, sys=5.30%, ctx=4193, majf=0, minf=13 00:20:48.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.200 issued rwts: total=2048,2145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:48.200 00:20:48.200 Run status group 0 (all jobs): 00:20:48.200 READ: bw=28.9MiB/s (30.3MB/s), 6605KiB/s-8184KiB/s (6764kB/s-8380kB/s), io=28.9MiB (30.4MB), run=1001-1001msec 00:20:48.200 WRITE: bw=32.7MiB/s (34.3MB/s), 8184KiB/s-8571KiB/s (8380kB/s-8777kB/s), io=32.8MiB (34.3MB), run=1001-1001msec 00:20:48.200 00:20:48.200 Disk stats (read/write): 00:20:48.200 nvme0n1: ios=1586/1708, merge=0/0, ticks=429/411, in_queue=840, util=89.97% 00:20:48.200 nvme0n2: ios=1585/1718, merge=0/0, ticks=430/407, in_queue=837, util=90.33% 00:20:48.200 nvme0n3: ios=1715/2048, merge=0/0, ticks=470/428, in_queue=898, util=95.22% 00:20:48.200 nvme0n4: ios=1719/2048, merge=0/0, ticks=465/434, in_queue=899, util=95.61% 00:20:48.200 11:48:21 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:48.200 [global] 00:20:48.200 thread=1 00:20:48.200 invalidate=1 00:20:48.200 
rw=randwrite 00:20:48.200 time_based=1 00:20:48.200 runtime=1 00:20:48.200 ioengine=libaio 00:20:48.200 direct=1 00:20:48.200 bs=4096 00:20:48.200 iodepth=1 00:20:48.200 norandommap=0 00:20:48.200 numjobs=1 00:20:48.200 00:20:48.200 verify_dump=1 00:20:48.200 verify_backlog=512 00:20:48.200 verify_state_save=0 00:20:48.200 do_verify=1 00:20:48.200 verify=crc32c-intel 00:20:48.200 [job0] 00:20:48.200 filename=/dev/nvme0n1 00:20:48.200 [job1] 00:20:48.200 filename=/dev/nvme0n2 00:20:48.200 [job2] 00:20:48.200 filename=/dev/nvme0n3 00:20:48.200 [job3] 00:20:48.200 filename=/dev/nvme0n4 00:20:48.200 Could not set queue depth (nvme0n1) 00:20:48.200 Could not set queue depth (nvme0n2) 00:20:48.200 Could not set queue depth (nvme0n3) 00:20:48.200 Could not set queue depth (nvme0n4) 00:20:48.461 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.461 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.461 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.461 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.461 fio-3.35 00:20:48.461 Starting 4 threads 00:20:49.839 00:20:49.839 job0: (groupid=0, jobs=1): err= 0: pid=76809: Wed Nov 20 11:48:22 2024 00:20:49.839 read: IOPS=1676, BW=6705KiB/s (6866kB/s)(6712KiB/1001msec) 00:20:49.839 slat (usec): min=8, max=117, avg=18.33, stdev= 6.76 00:20:49.839 clat (usec): min=150, max=454, avg=266.60, stdev=32.12 00:20:49.839 lat (usec): min=160, max=473, avg=284.92, stdev=33.88 00:20:49.839 clat percentiles (usec): 00:20:49.839 | 1.00th=[ 182], 5.00th=[ 208], 10.00th=[ 227], 20.00th=[ 245], 00:20:49.839 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:20:49.839 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:20:49.839 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 400], 99.95th=[ 453], 00:20:49.839 | 99.99th=[ 453] 00:20:49.839 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:49.839 slat (usec): min=14, max=113, avg=30.45, stdev=10.95 00:20:49.839 clat (usec): min=110, max=2274, avg=220.28, stdev=63.05 00:20:49.839 lat (usec): min=130, max=2305, avg=250.73, stdev=66.56 00:20:49.839 clat percentiles (usec): 00:20:49.839 | 1.00th=[ 137], 5.00th=[ 161], 10.00th=[ 176], 20.00th=[ 190], 00:20:49.839 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 227], 00:20:49.839 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 281], 00:20:49.839 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 701], 99.95th=[ 1303], 00:20:49.839 | 99.99th=[ 2278] 00:20:49.839 bw ( KiB/s): min= 8192, max= 8192, per=29.33%, avg=8192.00, stdev= 0.00, samples=1 00:20:49.840 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:49.840 lat (usec) : 250=56.98%, 500=42.94%, 750=0.03% 00:20:49.840 lat (msec) : 2=0.03%, 4=0.03% 00:20:49.840 cpu : usr=1.70%, sys=7.20%, ctx=3728, majf=0, minf=13 00:20:49.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 issued rwts: total=1678,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.840 job1: (groupid=0, jobs=1): err= 0: 
pid=76810: Wed Nov 20 11:48:22 2024 00:20:49.840 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:20:49.840 slat (nsec): min=9655, max=58251, avg=15645.10, stdev=8229.25 00:20:49.840 clat (usec): min=124, max=570, avg=297.94, stdev=69.70 00:20:49.840 lat (usec): min=137, max=603, avg=313.58, stdev=73.58 00:20:49.840 clat percentiles (usec): 00:20:49.840 | 1.00th=[ 137], 5.00th=[ 186], 10.00th=[ 202], 20.00th=[ 247], 00:20:49.840 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 318], 00:20:49.840 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 429], 00:20:49.840 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 553], 99.95th=[ 570], 00:20:49.840 | 99.99th=[ 570] 00:20:49.840 write: IOPS=1822, BW=7289KiB/s (7464kB/s)(7296KiB/1001msec); 0 zone resets 00:20:49.840 slat (usec): min=14, max=114, avg=37.63, stdev=12.37 00:20:49.840 clat (usec): min=105, max=3482, avg=242.88, stdev=100.10 00:20:49.840 lat (usec): min=120, max=3548, avg=280.51, stdev=102.62 00:20:49.840 clat percentiles (usec): 00:20:49.840 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 157], 20.00th=[ 176], 00:20:49.840 | 30.00th=[ 202], 40.00th=[ 227], 50.00th=[ 245], 60.00th=[ 262], 00:20:49.840 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 334], 00:20:49.840 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 1012], 99.95th=[ 3490], 00:20:49.840 | 99.99th=[ 3490] 00:20:49.840 bw ( KiB/s): min= 8192, max= 8192, per=29.33%, avg=8192.00, stdev= 0.00, samples=1 00:20:49.840 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:49.840 lat (usec) : 250=37.89%, 500=61.49%, 750=0.54%, 1000=0.03% 00:20:49.840 lat (msec) : 2=0.03%, 4=0.03% 00:20:49.840 cpu : usr=1.10%, sys=6.70%, ctx=3362, majf=0, minf=11 00:20:49.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 issued rwts: total=1536,1824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.840 job2: (groupid=0, jobs=1): err= 0: pid=76811: Wed Nov 20 11:48:22 2024 00:20:49.840 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:20:49.840 slat (nsec): min=9299, max=96081, avg=31135.54, stdev=8355.12 00:20:49.840 clat (usec): min=204, max=486, avg=304.01, stdev=36.20 00:20:49.840 lat (usec): min=235, max=565, avg=335.15, stdev=37.59 00:20:49.840 clat percentiles (usec): 00:20:49.840 | 1.00th=[ 227], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 273], 00:20:49.840 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:20:49.840 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 363], 00:20:49.840 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 420], 99.95th=[ 486], 00:20:49.840 | 99.99th=[ 486] 00:20:49.840 write: IOPS=1580, BW=6322KiB/s (6473kB/s)(6328KiB/1001msec); 0 zone resets 00:20:49.840 slat (usec): min=11, max=141, avg=41.71, stdev=13.48 00:20:49.840 clat (usec): min=152, max=4474, avg=257.96, stdev=110.57 00:20:49.840 lat (usec): min=185, max=4522, avg=299.67, stdev=112.10 00:20:49.840 clat percentiles (usec): 00:20:49.840 | 1.00th=[ 180], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 231], 00:20:49.840 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:20:49.840 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 306], 00:20:49.840 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 4490], 00:20:49.840 | 
99.99th=[ 4490] 00:20:49.840 bw ( KiB/s): min= 8192, max= 8192, per=29.33%, avg=8192.00, stdev= 0.00, samples=1 00:20:49.840 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:49.840 lat (usec) : 250=24.82%, 500=75.14% 00:20:49.840 lat (msec) : 10=0.03% 00:20:49.840 cpu : usr=2.10%, sys=8.90%, ctx=3118, majf=0, minf=13 00:20:49.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 issued rwts: total=1536,1582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.840 job3: (groupid=0, jobs=1): err= 0: pid=76812: Wed Nov 20 11:48:22 2024 00:20:49.840 read: IOPS=1202, BW=4811KiB/s (4927kB/s)(4816KiB/1001msec) 00:20:49.840 slat (nsec): min=8942, max=88901, avg=30334.18, stdev=10415.09 00:20:49.840 clat (usec): min=208, max=5628, avg=410.24, stdev=175.12 00:20:49.840 lat (usec): min=254, max=5661, avg=440.57, stdev=177.46 00:20:49.840 clat percentiles (usec): 00:20:49.840 | 1.00th=[ 247], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 314], 00:20:49.840 | 30.00th=[ 338], 40.00th=[ 367], 50.00th=[ 416], 60.00th=[ 449], 00:20:49.840 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[ 515], 95.00th=[ 529], 00:20:49.840 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 807], 99.95th=[ 5604], 00:20:49.840 | 99.99th=[ 5604] 00:20:49.840 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:20:49.840 slat (usec): min=13, max=150, avg=44.73, stdev=12.64 00:20:49.840 clat (usec): min=106, max=952, avg=254.99, stdev=55.62 00:20:49.840 lat (usec): min=124, max=1007, avg=299.73, stdev=58.76 00:20:49.840 clat percentiles (usec): 00:20:49.840 | 1.00th=[ 145], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 215], 00:20:49.840 | 30.00th=[ 231], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 273], 00:20:49.840 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 326], 00:20:49.840 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 922], 99.95th=[ 955], 00:20:49.840 | 99.99th=[ 955] 00:20:49.840 bw ( KiB/s): min= 7160, max= 7160, per=25.63%, avg=7160.00, stdev= 0.00, samples=1 00:20:49.840 iops : min= 1790, max= 1790, avg=1790.00, stdev= 0.00, samples=1 00:20:49.840 lat (usec) : 250=25.47%, 500=67.19%, 750=7.19%, 1000=0.11% 00:20:49.840 lat (msec) : 10=0.04% 00:20:49.840 cpu : usr=1.40%, sys=8.40%, ctx=2757, majf=0, minf=11 00:20:49.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.840 issued rwts: total=1204,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.840 00:20:49.840 Run status group 0 (all jobs): 00:20:49.840 READ: bw=23.2MiB/s (24.4MB/s), 4811KiB/s-6705KiB/s (4927kB/s-6866kB/s), io=23.3MiB (24.4MB), run=1001-1001msec 00:20:49.840 WRITE: bw=27.3MiB/s (28.6MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=27.3MiB (28.6MB), run=1001-1001msec 00:20:49.840 00:20:49.840 Disk stats (read/write): 00:20:49.840 nvme0n1: ios=1586/1725, merge=0/0, ticks=453/394, in_queue=847, util=90.57% 00:20:49.840 nvme0n2: ios=1445/1536, merge=0/0, ticks=452/411, in_queue=863, util=90.43% 00:20:49.840 nvme0n3: ios=1290/1536, merge=0/0, ticks=412/409, in_queue=821, util=91.67% 
00:20:49.840 nvme0n4: ios=1067/1458, merge=0/0, ticks=431/389, in_queue=820, util=91.45% 00:20:49.840 11:48:22 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:49.840 [global] 00:20:49.840 thread=1 00:20:49.840 invalidate=1 00:20:49.840 rw=write 00:20:49.840 time_based=1 00:20:49.840 runtime=1 00:20:49.840 ioengine=libaio 00:20:49.840 direct=1 00:20:49.840 bs=4096 00:20:49.840 iodepth=128 00:20:49.840 norandommap=0 00:20:49.840 numjobs=1 00:20:49.840 00:20:49.840 verify_dump=1 00:20:49.840 verify_backlog=512 00:20:49.840 verify_state_save=0 00:20:49.840 do_verify=1 00:20:49.840 verify=crc32c-intel 00:20:49.840 [job0] 00:20:49.840 filename=/dev/nvme0n1 00:20:49.840 [job1] 00:20:49.840 filename=/dev/nvme0n2 00:20:49.840 [job2] 00:20:49.840 filename=/dev/nvme0n3 00:20:49.840 [job3] 00:20:49.840 filename=/dev/nvme0n4 00:20:49.840 Could not set queue depth (nvme0n1) 00:20:49.840 Could not set queue depth (nvme0n2) 00:20:49.840 Could not set queue depth (nvme0n3) 00:20:49.840 Could not set queue depth (nvme0n4) 00:20:49.840 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.840 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.840 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.840 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.840 fio-3.35 00:20:49.840 Starting 4 threads 00:20:51.219 00:20:51.219 job0: (groupid=0, jobs=1): err= 0: pid=76871: Wed Nov 20 11:48:23 2024 00:20:51.219 read: IOPS=2804, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1003msec) 00:20:51.219 slat (usec): min=4, max=18352, avg=161.00, stdev=974.50 00:20:51.219 clat (usec): min=1091, max=50733, avg=21074.87, stdev=8100.83 00:20:51.219 lat (usec): min=5827, max=50778, avg=21235.87, stdev=8174.59 00:20:51.219 clat percentiles (usec): 00:20:51.219 | 1.00th=[ 7504], 5.00th=[11863], 10.00th=[14353], 20.00th=[15401], 00:20:51.219 | 30.00th=[16188], 40.00th=[16909], 50.00th=[17433], 60.00th=[18482], 00:20:51.219 | 70.00th=[25297], 80.00th=[29754], 90.00th=[35390], 95.00th=[36439], 00:20:51.219 | 99.00th=[40633], 99.50th=[40633], 99.90th=[43779], 99.95th=[47449], 00:20:51.219 | 99.99th=[50594] 00:20:51.219 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:20:51.219 slat (usec): min=8, max=17281, avg=168.09, stdev=1080.38 00:20:51.219 clat (usec): min=8068, max=51576, avg=21777.79, stdev=8311.63 00:20:51.219 lat (usec): min=8100, max=51617, avg=21945.88, stdev=8400.67 00:20:51.219 clat percentiles (usec): 00:20:51.219 | 1.00th=[ 9896], 5.00th=[12256], 10.00th=[14746], 20.00th=[15795], 00:20:51.219 | 30.00th=[16188], 40.00th=[16581], 50.00th=[17433], 60.00th=[19006], 00:20:51.219 | 70.00th=[27919], 80.00th=[32375], 90.00th=[33817], 95.00th=[34866], 00:20:51.219 | 99.00th=[39060], 99.50th=[40109], 99.90th=[47973], 99.95th=[50070], 00:20:51.219 | 99.99th=[51643] 00:20:51.219 bw ( KiB/s): min= 8192, max=16384, per=30.57%, avg=12288.00, stdev=5792.62, samples=2 00:20:51.219 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:20:51.219 lat (msec) : 2=0.02%, 10=1.65%, 20=63.69%, 50=34.61%, 100=0.03% 00:20:51.219 cpu : usr=2.89%, sys=11.88%, ctx=308, majf=0, minf=15 00:20:51.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:20:51.219 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.219 issued rwts: total=2813,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.219 job1: (groupid=0, jobs=1): err= 0: pid=76872: Wed Nov 20 11:48:23 2024 00:20:51.219 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:20:51.219 slat (usec): min=7, max=14334, avg=135.05, stdev=836.32 00:20:51.219 clat (usec): min=5842, max=53530, avg=16556.68, stdev=7010.77 00:20:51.219 lat (usec): min=5879, max=53550, avg=16691.74, stdev=7091.27 00:20:51.219 clat percentiles (usec): 00:20:51.219 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[11076], 20.00th=[11863], 00:20:51.219 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13698], 60.00th=[14746], 00:20:51.219 | 70.00th=[17695], 80.00th=[21103], 90.00th=[26608], 95.00th=[31851], 00:20:51.219 | 99.00th=[43779], 99.50th=[49546], 99.90th=[53740], 99.95th=[53740], 00:20:51.219 | 99.99th=[53740] 00:20:51.219 write: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1009msec); 0 zone resets 00:20:51.219 slat (usec): min=11, max=10769, avg=211.70, stdev=905.59 00:20:51.219 clat (usec): min=5054, max=88389, avg=28851.71, stdev=20533.08 00:20:51.219 lat (usec): min=5096, max=88403, avg=29063.42, stdev=20656.88 00:20:51.219 clat percentiles (usec): 00:20:51.219 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10683], 00:20:51.219 | 30.00th=[13173], 40.00th=[16188], 50.00th=[24773], 60.00th=[27657], 00:20:51.219 | 70.00th=[31065], 80.00th=[47449], 90.00th=[63701], 95.00th=[67634], 00:20:51.219 | 99.00th=[87557], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:20:51.219 | 99.99th=[88605] 00:20:51.219 bw ( KiB/s): min= 6264, max=16416, per=28.21%, avg=11340.00, stdev=7178.55, samples=2 00:20:51.219 iops : min= 1566, max= 4104, avg=2835.00, stdev=1794.64, samples=2 00:20:51.219 lat (msec) : 10=7.54%, 20=51.94%, 50=30.36%, 100=10.17% 00:20:51.219 cpu : usr=2.48%, sys=10.42%, ctx=375, majf=0, minf=10 00:20:51.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:51.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.219 issued rwts: total=2560,2958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.219 job2: (groupid=0, jobs=1): err= 0: pid=76873: Wed Nov 20 11:48:23 2024 00:20:51.219 read: IOPS=1971, BW=7885KiB/s (8074kB/s)(7924KiB/1005msec) 00:20:51.219 slat (usec): min=5, max=26120, avg=240.33, stdev=1282.65 00:20:51.219 clat (usec): min=3621, max=69061, avg=32324.46, stdev=9512.36 00:20:51.219 lat (usec): min=8443, max=77237, avg=32564.80, stdev=9574.27 00:20:51.219 clat percentiles (usec): 00:20:51.219 | 1.00th=[11600], 5.00th=[22414], 10.00th=[24511], 20.00th=[26346], 00:20:51.219 | 30.00th=[27395], 40.00th=[28181], 50.00th=[30016], 60.00th=[31851], 00:20:51.219 | 70.00th=[33162], 80.00th=[37487], 90.00th=[50594], 95.00th=[52167], 00:20:51.219 | 99.00th=[66323], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:20:51.219 | 99.99th=[68682] 00:20:51.219 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:20:51.219 slat (usec): min=3, max=21211, avg=245.63, stdev=1447.25 00:20:51.219 clat (usec): min=16125, max=55677, avg=30518.35, stdev=6248.29 00:20:51.219 lat (usec): min=16156, max=55717, 
avg=30763.98, stdev=6389.06 00:20:51.219 clat percentiles (usec): 00:20:51.219 | 1.00th=[19268], 5.00th=[21103], 10.00th=[22152], 20.00th=[24249], 00:20:51.219 | 30.00th=[26346], 40.00th=[29492], 50.00th=[31589], 60.00th=[32637], 00:20:51.219 | 70.00th=[33817], 80.00th=[34866], 90.00th=[36963], 95.00th=[38536], 00:20:51.219 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[55837], 00:20:51.219 | 99.99th=[55837] 00:20:51.219 bw ( KiB/s): min= 8192, max= 8192, per=20.38%, avg=8192.00, stdev= 0.00, samples=2 00:20:51.219 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:20:51.219 lat (msec) : 4=0.02%, 10=0.25%, 20=1.91%, 50=91.73%, 100=6.08% 00:20:51.219 cpu : usr=2.09%, sys=7.87%, ctx=425, majf=0, minf=11 00:20:51.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:20:51.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.219 issued rwts: total=1981,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.219 job3: (groupid=0, jobs=1): err= 0: pid=76874: Wed Nov 20 11:48:23 2024 00:20:51.219 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:20:51.219 slat (usec): min=2, max=20282, avg=225.05, stdev=1110.57 00:20:51.219 clat (usec): min=6701, max=84405, avg=29635.52, stdev=14471.76 00:20:51.219 lat (usec): min=6740, max=84461, avg=29860.57, stdev=14576.40 00:20:51.219 clat percentiles (usec): 00:20:51.219 | 1.00th=[ 7439], 5.00th=[16581], 10.00th=[16909], 20.00th=[17957], 00:20:51.219 | 30.00th=[19792], 40.00th=[24249], 50.00th=[27132], 60.00th=[28705], 00:20:51.219 | 70.00th=[31589], 80.00th=[35914], 90.00th=[49546], 95.00th=[67634], 00:20:51.220 | 99.00th=[78119], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:20:51.220 | 99.99th=[84411] 00:20:51.220 write: IOPS=2055, BW=8223KiB/s (8421kB/s)(8248KiB/1003msec); 0 zone resets 00:20:51.220 slat (usec): min=6, max=23981, avg=251.88, stdev=1224.67 00:20:51.220 clat (usec): min=2519, max=63743, avg=30839.29, stdev=11472.02 00:20:51.220 lat (usec): min=2691, max=63778, avg=31091.17, stdev=11548.49 00:20:51.220 clat percentiles (usec): 00:20:51.220 | 1.00th=[14746], 5.00th=[16581], 10.00th=[17433], 20.00th=[22152], 00:20:51.220 | 30.00th=[23987], 40.00th=[25035], 50.00th=[26608], 60.00th=[31065], 00:20:51.220 | 70.00th=[35914], 80.00th=[39584], 90.00th=[48497], 95.00th=[54264], 00:20:51.220 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:20:51.220 | 99.99th=[63701] 00:20:51.220 bw ( KiB/s): min= 8192, max= 8208, per=20.40%, avg=8200.00, stdev=11.31, samples=2 00:20:51.220 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:20:51.220 lat (msec) : 4=0.12%, 10=0.80%, 20=21.17%, 50=69.08%, 100=8.83% 00:20:51.220 cpu : usr=2.10%, sys=7.88%, ctx=434, majf=0, minf=15 00:20:51.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:51.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.220 issued rwts: total=2048,2062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.220 00:20:51.220 Run status group 0 (all jobs): 00:20:51.220 READ: bw=36.4MiB/s (38.2MB/s), 7885KiB/s-11.0MiB/s (8074kB/s-11.5MB/s), io=36.7MiB (38.5MB), run=1003-1009msec 00:20:51.220 
WRITE: bw=39.3MiB/s (41.2MB/s), 8151KiB/s-12.0MiB/s (8347kB/s-12.5MB/s), io=39.6MiB (41.5MB), run=1003-1009msec 00:20:51.220 00:20:51.220 Disk stats (read/write): 00:20:51.220 nvme0n1: ios=2362/2560, merge=0/0, ticks=24465/25454, in_queue=49919, util=88.98% 00:20:51.220 nvme0n2: ios=2609/2599, merge=0/0, ticks=39159/63767, in_queue=102926, util=90.74% 00:20:51.220 nvme0n3: ios=1586/1898, merge=0/0, ticks=20673/22280, in_queue=42953, util=89.34% 00:20:51.220 nvme0n4: ios=1566/2048, merge=0/0, ticks=14247/19808, in_queue=34055, util=89.73% 00:20:51.220 11:48:23 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:51.220 [global] 00:20:51.220 thread=1 00:20:51.220 invalidate=1 00:20:51.220 rw=randwrite 00:20:51.220 time_based=1 00:20:51.220 runtime=1 00:20:51.220 ioengine=libaio 00:20:51.220 direct=1 00:20:51.220 bs=4096 00:20:51.220 iodepth=128 00:20:51.220 norandommap=0 00:20:51.220 numjobs=1 00:20:51.220 00:20:51.220 verify_dump=1 00:20:51.220 verify_backlog=512 00:20:51.220 verify_state_save=0 00:20:51.220 do_verify=1 00:20:51.220 verify=crc32c-intel 00:20:51.220 [job0] 00:20:51.220 filename=/dev/nvme0n1 00:20:51.220 [job1] 00:20:51.220 filename=/dev/nvme0n2 00:20:51.220 [job2] 00:20:51.220 filename=/dev/nvme0n3 00:20:51.220 [job3] 00:20:51.220 filename=/dev/nvme0n4 00:20:51.220 Could not set queue depth (nvme0n1) 00:20:51.220 Could not set queue depth (nvme0n2) 00:20:51.220 Could not set queue depth (nvme0n3) 00:20:51.220 Could not set queue depth (nvme0n4) 00:20:51.220 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.220 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.220 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.220 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.220 fio-3.35 00:20:51.220 Starting 4 threads 00:20:52.596 00:20:52.596 job0: (groupid=0, jobs=1): err= 0: pid=76935: Wed Nov 20 11:48:25 2024 00:20:52.596 read: IOPS=2216, BW=8866KiB/s (9079kB/s)(8884KiB/1002msec) 00:20:52.596 slat (usec): min=7, max=7869, avg=197.79, stdev=947.91 00:20:52.596 clat (usec): min=1370, max=39803, avg=24709.37, stdev=5507.14 00:20:52.596 lat (usec): min=1389, max=39825, avg=24907.16, stdev=5527.17 00:20:52.596 clat percentiles (usec): 00:20:52.596 | 1.00th=[ 8160], 5.00th=[17171], 10.00th=[19530], 20.00th=[21890], 00:20:52.596 | 30.00th=[22414], 40.00th=[22676], 50.00th=[23725], 60.00th=[24511], 00:20:52.596 | 70.00th=[25822], 80.00th=[29492], 90.00th=[33424], 95.00th=[35390], 00:20:52.596 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:20:52.596 | 99.99th=[39584] 00:20:52.596 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:20:52.596 slat (usec): min=22, max=11313, avg=209.33, stdev=941.11 00:20:52.596 clat (usec): min=13450, max=47794, avg=27972.74, stdev=6438.25 00:20:52.596 lat (usec): min=13485, max=47832, avg=28182.08, stdev=6480.77 00:20:52.596 clat percentiles (usec): 00:20:52.596 | 1.00th=[16909], 5.00th=[19006], 10.00th=[20055], 20.00th=[23200], 00:20:52.596 | 30.00th=[24511], 40.00th=[25297], 50.00th=[26346], 60.00th=[27657], 00:20:52.596 | 70.00th=[30540], 80.00th=[34866], 90.00th=[38011], 95.00th=[39584], 00:20:52.596 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44303], 
99.95th=[45351], 00:20:52.596 | 99.99th=[47973] 00:20:52.596 bw ( KiB/s): min= 8192, max=12288, per=19.62%, avg=10240.00, stdev=2896.31, samples=2 00:20:52.596 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:20:52.596 lat (msec) : 2=0.13%, 10=0.88%, 20=9.31%, 50=89.69% 00:20:52.596 cpu : usr=3.00%, sys=9.59%, ctx=436, majf=0, minf=7 00:20:52.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:52.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.596 issued rwts: total=2221,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.596 job1: (groupid=0, jobs=1): err= 0: pid=76936: Wed Nov 20 11:48:25 2024 00:20:52.596 read: IOPS=3119, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1005msec) 00:20:52.596 slat (usec): min=4, max=7397, avg=133.81, stdev=693.81 00:20:52.596 clat (usec): min=612, max=40087, avg=17652.43, stdev=7181.31 00:20:52.596 lat (usec): min=7345, max=40225, avg=17786.23, stdev=7230.88 00:20:52.596 clat percentiles (usec): 00:20:52.596 | 1.00th=[ 9372], 5.00th=[11731], 10.00th=[13304], 20.00th=[13829], 00:20:52.596 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:20:52.596 | 70.00th=[16057], 80.00th=[17433], 90.00th=[33162], 95.00th=[34341], 00:20:52.596 | 99.00th=[37487], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:20:52.596 | 99.99th=[40109] 00:20:52.596 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:20:52.596 slat (usec): min=6, max=12505, avg=153.94, stdev=753.15 00:20:52.596 clat (usec): min=8819, max=46268, avg=19894.52, stdev=8792.06 00:20:52.596 lat (usec): min=8898, max=46307, avg=20048.46, stdev=8850.56 00:20:52.596 clat percentiles (usec): 00:20:52.596 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[13829], 20.00th=[14877], 00:20:52.596 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15926], 60.00th=[16188], 00:20:52.596 | 70.00th=[16909], 80.00th=[29492], 90.00th=[34866], 95.00th=[39060], 00:20:52.596 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[44827], 00:20:52.596 | 99.99th=[46400] 00:20:52.596 bw ( KiB/s): min=10909, max=17264, per=26.99%, avg=14086.50, stdev=4493.66, samples=2 00:20:52.596 iops : min= 2727, max= 4316, avg=3521.50, stdev=1123.59, samples=2 00:20:52.596 lat (usec) : 750=0.01% 00:20:52.596 lat (msec) : 10=1.95%, 20=76.59%, 50=21.45% 00:20:52.596 cpu : usr=3.29%, sys=12.85%, ctx=502, majf=0, minf=19 00:20:52.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:52.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.596 issued rwts: total=3135,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.596 job2: (groupid=0, jobs=1): err= 0: pid=76937: Wed Nov 20 11:48:25 2024 00:20:52.596 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:20:52.596 slat (usec): min=7, max=23741, avg=184.27, stdev=1309.56 00:20:52.596 clat (usec): min=6893, max=48749, avg=23934.24, stdev=6990.19 00:20:52.596 lat (usec): min=6911, max=48770, avg=24118.51, stdev=7073.22 00:20:52.596 clat percentiles (usec): 00:20:52.596 | 1.00th=[12911], 5.00th=[14615], 10.00th=[17171], 20.00th=[18220], 00:20:52.596 | 30.00th=[19268], 40.00th=[21365], 50.00th=[23725], 
60.00th=[24249], 00:20:52.596 | 70.00th=[25560], 80.00th=[28705], 90.00th=[32375], 95.00th=[38011], 00:20:52.596 | 99.00th=[46400], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:20:52.596 | 99.99th=[48497] 00:20:52.596 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1006msec); 0 zone resets 00:20:52.596 slat (usec): min=10, max=22718, avg=164.87, stdev=1203.20 00:20:52.596 clat (usec): min=3173, max=48700, avg=21682.89, stdev=5156.84 00:20:52.596 lat (usec): min=4237, max=51198, avg=21847.75, stdev=5279.76 00:20:52.596 clat percentiles (usec): 00:20:52.597 | 1.00th=[ 6718], 5.00th=[11207], 10.00th=[15008], 20.00th=[19268], 00:20:52.597 | 30.00th=[19530], 40.00th=[20579], 50.00th=[21365], 60.00th=[24249], 00:20:52.597 | 70.00th=[25035], 80.00th=[26346], 90.00th=[27132], 95.00th=[27657], 00:20:52.597 | 99.00th=[28967], 99.50th=[30802], 99.90th=[48497], 99.95th=[48497], 00:20:52.597 | 99.99th=[48497] 00:20:52.597 bw ( KiB/s): min=11048, max=12360, per=22.43%, avg=11704.00, stdev=927.72, samples=2 00:20:52.597 iops : min= 2762, max= 3090, avg=2926.00, stdev=231.93, samples=2 00:20:52.597 lat (msec) : 4=0.02%, 10=2.09%, 20=32.94%, 50=64.96% 00:20:52.597 cpu : usr=2.99%, sys=9.95%, ctx=284, majf=0, minf=10 00:20:52.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:52.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.597 issued rwts: total=2560,3051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.597 job3: (groupid=0, jobs=1): err= 0: pid=76938: Wed Nov 20 11:48:25 2024 00:20:52.597 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:20:52.597 slat (usec): min=7, max=4002, avg=126.02, stdev=534.55 00:20:52.597 clat (usec): min=12221, max=20045, avg=16769.88, stdev=1480.36 00:20:52.597 lat (usec): min=12244, max=20065, avg=16895.90, stdev=1407.64 00:20:52.597 clat percentiles (usec): 00:20:52.597 | 1.00th=[12911], 5.00th=[13960], 10.00th=[14484], 20.00th=[15270], 00:20:52.597 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:20:52.597 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18744], 00:20:52.597 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:20:52.597 | 99.99th=[20055] 00:20:52.597 write: IOPS=3910, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1005msec); 0 zone resets 00:20:52.597 slat (usec): min=12, max=8946, avg=130.81, stdev=539.16 00:20:52.597 clat (usec): min=223, max=29698, avg=16917.55, stdev=2599.19 00:20:52.597 lat (usec): min=4039, max=29740, avg=17048.37, stdev=2593.43 00:20:52.597 clat percentiles (usec): 00:20:52.597 | 1.00th=[ 5800], 5.00th=[13698], 10.00th=[14484], 20.00th=[15008], 00:20:52.597 | 30.00th=[15664], 40.00th=[16450], 50.00th=[17433], 60.00th=[17695], 00:20:52.597 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19006], 95.00th=[19792], 00:20:52.597 | 99.00th=[23987], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:20:52.597 | 99.99th=[29754] 00:20:52.597 bw ( KiB/s): min=14032, max=16384, per=29.14%, avg=15208.00, stdev=1663.12, samples=2 00:20:52.597 iops : min= 3508, max= 4096, avg=3802.00, stdev=415.78, samples=2 00:20:52.597 lat (usec) : 250=0.01% 00:20:52.597 lat (msec) : 10=0.80%, 20=96.75%, 50=2.44% 00:20:52.597 cpu : usr=3.39%, sys=15.64%, ctx=622, majf=0, minf=13 00:20:52.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:52.597 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.597 issued rwts: total=3584,3930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.597 00:20:52.597 Run status group 0 (all jobs): 00:20:52.597 READ: bw=44.7MiB/s (46.8MB/s), 8866KiB/s-13.9MiB/s (9079kB/s-14.6MB/s), io=44.9MiB (47.1MB), run=1002-1006msec 00:20:52.597 WRITE: bw=51.0MiB/s (53.4MB/s), 9.98MiB/s-15.3MiB/s (10.5MB/s-16.0MB/s), io=51.3MiB (53.8MB), run=1002-1006msec 00:20:52.597 00:20:52.597 Disk stats (read/write): 00:20:52.597 nvme0n1: ios=2098/2079, merge=0/0, ticks=16256/18343, in_queue=34599, util=89.67% 00:20:52.597 nvme0n2: ios=2629/3072, merge=0/0, ticks=18974/22462, in_queue=41436, util=88.98% 00:20:52.597 nvme0n3: ios=2367/2560, merge=0/0, ticks=52993/51611, in_queue=104604, util=90.00% 00:20:52.597 nvme0n4: ios=3078/3444, merge=0/0, ticks=11993/13369, in_queue=25362, util=89.04% 00:20:52.597 11:48:25 -- target/fio.sh@55 -- # sync 00:20:52.597 11:48:25 -- target/fio.sh@59 -- # fio_pid=76952 00:20:52.597 11:48:25 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:52.597 11:48:25 -- target/fio.sh@61 -- # sleep 3 00:20:52.597 [global] 00:20:52.597 thread=1 00:20:52.597 invalidate=1 00:20:52.597 rw=read 00:20:52.597 time_based=1 00:20:52.597 runtime=10 00:20:52.597 ioengine=libaio 00:20:52.597 direct=1 00:20:52.597 bs=4096 00:20:52.597 iodepth=1 00:20:52.597 norandommap=1 00:20:52.597 numjobs=1 00:20:52.597 00:20:52.597 [job0] 00:20:52.597 filename=/dev/nvme0n1 00:20:52.597 [job1] 00:20:52.597 filename=/dev/nvme0n2 00:20:52.597 [job2] 00:20:52.597 filename=/dev/nvme0n3 00:20:52.597 [job3] 00:20:52.597 filename=/dev/nvme0n4 00:20:52.597 Could not set queue depth (nvme0n1) 00:20:52.597 Could not set queue depth (nvme0n2) 00:20:52.597 Could not set queue depth (nvme0n3) 00:20:52.597 Could not set queue depth (nvme0n4) 00:20:52.597 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.597 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.597 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.597 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.597 fio-3.35 00:20:52.597 Starting 4 threads 00:20:55.892 11:48:28 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:55.892 fio: pid=76995, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:55.892 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=57081856, buflen=4096 00:20:55.892 11:48:28 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:55.892 fio: pid=76994, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:55.892 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=26796032, buflen=4096 00:20:55.892 11:48:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:55.892 11:48:28 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:56.161 fio: pid=76992, err=95/file:io_u.c:1889, func=io_u error, error=Operation 
not supported 00:20:56.161 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50384896, buflen=4096 00:20:56.161 11:48:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:56.161 11:48:28 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:56.161 fio: pid=76993, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:56.161 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54091776, buflen=4096 00:20:56.419 00:20:56.419 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76992: Wed Nov 20 11:48:29 2024 00:20:56.419 read: IOPS=3835, BW=15.0MiB/s (15.7MB/s)(48.1MiB/3207msec) 00:20:56.419 slat (usec): min=6, max=14620, avg=19.08, stdev=212.03 00:20:56.419 clat (usec): min=91, max=2907, avg=240.28, stdev=54.86 00:20:56.419 lat (usec): min=100, max=14882, avg=259.36, stdev=220.93 00:20:56.419 clat percentiles (usec): 00:20:56.419 | 1.00th=[ 115], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 217], 00:20:56.419 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:20:56.419 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:20:56.419 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 515], 99.95th=[ 1029], 00:20:56.419 | 99.99th=[ 2573] 00:20:56.419 bw ( KiB/s): min=15004, max=16104, per=28.63%, avg=15364.67, stdev=415.39, samples=6 00:20:56.419 iops : min= 3751, max= 4026, avg=3841.17, stdev=103.85, samples=6 00:20:56.419 lat (usec) : 100=0.15%, 250=63.54%, 500=36.20%, 750=0.03%, 1000=0.02% 00:20:56.419 lat (msec) : 2=0.03%, 4=0.02% 00:20:56.419 cpu : usr=0.84%, sys=5.15%, ctx=12311, majf=0, minf=1 00:20:56.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.419 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.419 issued rwts: total=12302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.419 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76993: Wed Nov 20 11:48:29 2024 00:20:56.419 read: IOPS=3852, BW=15.0MiB/s (15.8MB/s)(51.6MiB/3428msec) 00:20:56.419 slat (usec): min=9, max=23648, avg=18.41, stdev=312.98 00:20:56.419 clat (nsec): min=1223, max=3553.0k, avg=240427.46, stdev=89746.26 00:20:56.419 lat (usec): min=104, max=23836, avg=258.84, stdev=324.78 00:20:56.419 clat percentiles (usec): 00:20:56.419 | 1.00th=[ 105], 5.00th=[ 115], 10.00th=[ 124], 20.00th=[ 165], 00:20:56.419 | 30.00th=[ 184], 40.00th=[ 206], 50.00th=[ 233], 60.00th=[ 273], 00:20:56.419 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 347], 95.00th=[ 367], 00:20:56.419 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 469], 99.95th=[ 594], 00:20:56.419 | 99.99th=[ 2409] 00:20:56.419 bw ( KiB/s): min=14096, max=14760, per=27.04%, avg=14509.33, stdev=250.37, samples=6 00:20:56.419 iops : min= 3524, max= 3690, avg=3627.33, stdev=62.59, samples=6 00:20:56.419 lat (usec) : 2=0.01%, 100=0.24%, 250=53.95%, 500=45.74%, 750=0.01% 00:20:56.419 lat (usec) : 1000=0.02% 00:20:56.419 lat (msec) : 4=0.02% 00:20:56.419 cpu : usr=0.29%, sys=3.21%, ctx=13221, majf=0, minf=2 00:20:56.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:56.419 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.419 issued rwts: total=13207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.419 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76994: Wed Nov 20 11:48:29 2024 00:20:56.419 read: IOPS=2161, BW=8645KiB/s (8852kB/s)(25.6MiB/3027msec) 00:20:56.419 slat (usec): min=9, max=12681, avg=36.43, stdev=204.61 00:20:56.419 clat (usec): min=116, max=6901, avg=423.47, stdev=133.65 00:20:56.419 lat (usec): min=128, max=12976, avg=459.90, stdev=243.11 00:20:56.419 clat percentiles (usec): 00:20:56.419 | 1.00th=[ 169], 5.00th=[ 247], 10.00th=[ 277], 20.00th=[ 322], 00:20:56.419 | 30.00th=[ 371], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 469], 00:20:56.419 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 545], 00:20:56.419 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 750], 99.95th=[ 1663], 00:20:56.419 | 99.99th=[ 6915] 00:20:56.419 bw ( KiB/s): min= 8288, max= 8488, per=15.65%, avg=8396.80, stdev=81.51, samples=5 00:20:56.419 iops : min= 2072, max= 2122, avg=2099.20, stdev=20.38, samples=5 00:20:56.419 lat (usec) : 250=5.27%, 500=72.06%, 750=22.56%, 1000=0.02% 00:20:56.419 lat (msec) : 2=0.05%, 4=0.02%, 10=0.02% 00:20:56.419 cpu : usr=0.99%, sys=5.95%, ctx=6545, majf=0, minf=2 00:20:56.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.419 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.419 issued rwts: total=6543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.419 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76995: Wed Nov 20 11:48:29 2024 00:20:56.419 read: IOPS=4885, BW=19.1MiB/s (20.0MB/s)(54.4MiB/2853msec) 00:20:56.419 slat (nsec): min=7101, max=88813, avg=10497.31, stdev=2426.72 00:20:56.419 clat (usec): min=132, max=1498, avg=193.41, stdev=36.16 00:20:56.419 lat (usec): min=142, max=1509, avg=203.91, stdev=36.30 00:20:56.420 clat percentiles (usec): 00:20:56.420 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:20:56.420 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 198], 00:20:56.420 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 247], 00:20:56.420 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 379], 99.95th=[ 502], 00:20:56.420 | 99.99th=[ 1401] 00:20:56.420 bw ( KiB/s): min=19264, max=19808, per=36.58%, avg=19630.40, stdev=242.94, samples=5 00:20:56.420 iops : min= 4816, max= 4952, avg=4907.60, stdev=60.74, samples=5 00:20:56.420 lat (usec) : 250=96.17%, 500=3.77%, 750=0.02%, 1000=0.01% 00:20:56.420 lat (msec) : 2=0.02% 00:20:56.420 cpu : usr=0.39%, sys=3.75%, ctx=13937, majf=0, minf=2 00:20:56.420 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.420 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.420 issued rwts: total=13937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.420 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.420 00:20:56.420 Run status group 0 (all jobs): 00:20:56.420 READ: bw=52.4MiB/s (54.9MB/s), 8645KiB/s-19.1MiB/s (8852kB/s-20.0MB/s), io=180MiB 
(188MB), run=2853-3428msec 00:20:56.420 00:20:56.420 Disk stats (read/write): 00:20:56.420 nvme0n1: ios=12015/0, merge=0/0, ticks=2917/0, in_queue=2917, util=95.17% 00:20:56.420 nvme0n2: ios=12880/0, merge=0/0, ticks=3178/0, in_queue=3178, util=94.59% 00:20:56.420 nvme0n3: ios=6134/0, merge=0/0, ticks=2724/0, in_queue=2724, util=96.65% 00:20:56.420 nvme0n4: ios=12883/0, merge=0/0, ticks=2534/0, in_queue=2534, util=96.45% 00:20:56.420 11:48:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:56.420 11:48:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:56.420 11:48:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:56.420 11:48:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:56.678 11:48:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:56.678 11:48:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:56.937 11:48:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:56.937 11:48:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:57.196 11:48:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:57.196 11:48:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:57.455 11:48:30 -- target/fio.sh@69 -- # fio_status=0 00:20:57.455 11:48:30 -- target/fio.sh@70 -- # wait 76952 00:20:57.455 11:48:30 -- target/fio.sh@70 -- # fio_status=4 00:20:57.455 11:48:30 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:57.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:57.455 11:48:30 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:57.455 11:48:30 -- common/autotest_common.sh@1208 -- # local i=0 00:20:57.455 11:48:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:57.455 11:48:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:57.455 11:48:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:57.455 11:48:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:57.455 nvmf hotplug test: fio failed as expected 00:20:57.455 11:48:30 -- common/autotest_common.sh@1220 -- # return 0 00:20:57.455 11:48:30 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:57.455 11:48:30 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:57.455 11:48:30 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.714 11:48:30 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:57.714 11:48:30 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:57.714 11:48:30 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:57.714 11:48:30 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:57.714 11:48:30 -- target/fio.sh@91 -- # nvmftestfini 00:20:57.714 11:48:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:57.714 11:48:30 -- nvmf/common.sh@116 -- # sync 00:20:57.714 11:48:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:57.714 11:48:30 -- nvmf/common.sh@119 -- # set +e 00:20:57.714 11:48:30 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:20:57.714 11:48:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:57.714 rmmod nvme_tcp 00:20:57.714 rmmod nvme_fabrics 00:20:57.714 rmmod nvme_keyring 00:20:57.714 11:48:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:57.714 11:48:30 -- nvmf/common.sh@123 -- # set -e 00:20:57.714 11:48:30 -- nvmf/common.sh@124 -- # return 0 00:20:57.714 11:48:30 -- nvmf/common.sh@477 -- # '[' -n 76463 ']' 00:20:57.714 11:48:30 -- nvmf/common.sh@478 -- # killprocess 76463 00:20:57.714 11:48:30 -- common/autotest_common.sh@936 -- # '[' -z 76463 ']' 00:20:57.714 11:48:30 -- common/autotest_common.sh@940 -- # kill -0 76463 00:20:57.714 11:48:30 -- common/autotest_common.sh@941 -- # uname 00:20:57.714 11:48:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:57.714 11:48:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76463 00:20:57.714 11:48:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:57.714 11:48:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:57.714 killing process with pid 76463 00:20:57.714 11:48:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76463' 00:20:57.714 11:48:30 -- common/autotest_common.sh@955 -- # kill 76463 00:20:57.714 11:48:30 -- common/autotest_common.sh@960 -- # wait 76463 00:20:57.973 11:48:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:57.973 11:48:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:57.973 11:48:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:57.973 11:48:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.973 11:48:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:57.973 11:48:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.973 11:48:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.973 11:48:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.973 11:48:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:57.973 00:20:57.973 real 0m18.274s 00:20:57.973 user 1m10.114s 00:20:57.973 sys 0m7.013s 00:20:57.973 11:48:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:57.973 11:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:57.973 ************************************ 00:20:57.973 END TEST nvmf_fio_target 00:20:57.973 ************************************ 00:20:57.973 11:48:30 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:57.973 11:48:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:57.973 11:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:57.973 11:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:57.973 ************************************ 00:20:57.973 START TEST nvmf_bdevio 00:20:57.973 ************************************ 00:20:57.973 11:48:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:58.233 * Looking for test storage... 
00:20:58.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:58.233 11:48:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:58.233 11:48:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:58.233 11:48:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:58.233 11:48:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:58.233 11:48:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:58.233 11:48:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:58.233 11:48:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:58.233 11:48:31 -- scripts/common.sh@335 -- # IFS=.-: 00:20:58.233 11:48:31 -- scripts/common.sh@335 -- # read -ra ver1 00:20:58.233 11:48:31 -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.233 11:48:31 -- scripts/common.sh@336 -- # read -ra ver2 00:20:58.233 11:48:31 -- scripts/common.sh@337 -- # local 'op=<' 00:20:58.233 11:48:31 -- scripts/common.sh@339 -- # ver1_l=2 00:20:58.233 11:48:31 -- scripts/common.sh@340 -- # ver2_l=1 00:20:58.233 11:48:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:58.233 11:48:31 -- scripts/common.sh@343 -- # case "$op" in 00:20:58.233 11:48:31 -- scripts/common.sh@344 -- # : 1 00:20:58.233 11:48:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:58.233 11:48:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.233 11:48:31 -- scripts/common.sh@364 -- # decimal 1 00:20:58.233 11:48:31 -- scripts/common.sh@352 -- # local d=1 00:20:58.233 11:48:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.233 11:48:31 -- scripts/common.sh@354 -- # echo 1 00:20:58.233 11:48:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:58.233 11:48:31 -- scripts/common.sh@365 -- # decimal 2 00:20:58.233 11:48:31 -- scripts/common.sh@352 -- # local d=2 00:20:58.233 11:48:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.233 11:48:31 -- scripts/common.sh@354 -- # echo 2 00:20:58.233 11:48:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:58.233 11:48:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:58.233 11:48:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:58.233 11:48:31 -- scripts/common.sh@367 -- # return 0 00:20:58.233 11:48:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.233 11:48:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.233 --rc genhtml_branch_coverage=1 00:20:58.233 --rc genhtml_function_coverage=1 00:20:58.233 --rc genhtml_legend=1 00:20:58.233 --rc geninfo_all_blocks=1 00:20:58.233 --rc geninfo_unexecuted_blocks=1 00:20:58.233 00:20:58.233 ' 00:20:58.233 11:48:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.233 --rc genhtml_branch_coverage=1 00:20:58.233 --rc genhtml_function_coverage=1 00:20:58.233 --rc genhtml_legend=1 00:20:58.233 --rc geninfo_all_blocks=1 00:20:58.233 --rc geninfo_unexecuted_blocks=1 00:20:58.233 00:20:58.233 ' 00:20:58.233 11:48:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.233 --rc genhtml_branch_coverage=1 00:20:58.233 --rc genhtml_function_coverage=1 00:20:58.233 --rc genhtml_legend=1 00:20:58.233 --rc geninfo_all_blocks=1 00:20:58.233 --rc geninfo_unexecuted_blocks=1 00:20:58.233 00:20:58.233 ' 00:20:58.234 
11:48:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:58.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.234 --rc genhtml_branch_coverage=1 00:20:58.234 --rc genhtml_function_coverage=1 00:20:58.234 --rc genhtml_legend=1 00:20:58.234 --rc geninfo_all_blocks=1 00:20:58.234 --rc geninfo_unexecuted_blocks=1 00:20:58.234 00:20:58.234 ' 00:20:58.234 11:48:31 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.234 11:48:31 -- nvmf/common.sh@7 -- # uname -s 00:20:58.234 11:48:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.234 11:48:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.234 11:48:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.234 11:48:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.234 11:48:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.234 11:48:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.234 11:48:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.234 11:48:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.234 11:48:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.234 11:48:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.234 11:48:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:20:58.234 11:48:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:20:58.234 11:48:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.234 11:48:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.234 11:48:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.234 11:48:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.234 11:48:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.234 11:48:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.234 11:48:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.234 11:48:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.234 11:48:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.234 11:48:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.234 11:48:31 -- paths/export.sh@5 -- # export PATH 00:20:58.234 11:48:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.234 11:48:31 -- nvmf/common.sh@46 -- # : 0 00:20:58.234 11:48:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:58.234 11:48:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:58.234 11:48:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:58.234 11:48:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.234 11:48:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.234 11:48:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:58.234 11:48:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:58.234 11:48:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:58.234 11:48:31 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.234 11:48:31 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.234 11:48:31 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:58.234 11:48:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:58.234 11:48:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.234 11:48:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:58.234 11:48:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:58.234 11:48:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:58.234 11:48:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.234 11:48:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.234 11:48:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.234 11:48:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:58.234 11:48:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:58.234 11:48:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:58.234 11:48:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:58.234 11:48:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:58.234 11:48:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:58.234 11:48:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.234 11:48:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.234 11:48:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:58.234 11:48:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:58.234 11:48:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.234 11:48:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.234 11:48:31 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.234 11:48:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.234 11:48:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.234 11:48:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.234 11:48:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.234 11:48:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.234 11:48:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:58.495 11:48:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:58.495 Cannot find device "nvmf_tgt_br" 00:20:58.495 11:48:31 -- nvmf/common.sh@154 -- # true 00:20:58.495 11:48:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.495 Cannot find device "nvmf_tgt_br2" 00:20:58.496 11:48:31 -- nvmf/common.sh@155 -- # true 00:20:58.496 11:48:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:58.496 11:48:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:58.496 Cannot find device "nvmf_tgt_br" 00:20:58.496 11:48:31 -- nvmf/common.sh@157 -- # true 00:20:58.496 11:48:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:58.496 Cannot find device "nvmf_tgt_br2" 00:20:58.496 11:48:31 -- nvmf/common.sh@158 -- # true 00:20:58.496 11:48:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:58.496 11:48:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:58.496 11:48:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.496 11:48:31 -- nvmf/common.sh@161 -- # true 00:20:58.496 11:48:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.496 11:48:31 -- nvmf/common.sh@162 -- # true 00:20:58.496 11:48:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:58.496 11:48:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:58.496 11:48:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:58.496 11:48:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:58.496 11:48:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:58.496 11:48:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:58.496 11:48:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:58.496 11:48:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:58.496 11:48:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:58.496 11:48:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:58.496 11:48:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:58.496 11:48:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:58.496 11:48:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:58.496 11:48:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:58.496 11:48:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:58.496 11:48:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:58.496 11:48:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:58.496 11:48:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:58.755 11:48:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:58.755 11:48:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:58.755 11:48:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:58.755 11:48:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:58.755 11:48:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:58.755 11:48:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:58.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:20:58.755 00:20:58.755 --- 10.0.0.2 ping statistics --- 00:20:58.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.755 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:20:58.755 11:48:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:58.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:58.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:20:58.756 00:20:58.756 --- 10.0.0.3 ping statistics --- 00:20:58.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.756 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:58.756 11:48:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:58.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:20:58.756 00:20:58.756 --- 10.0.0.1 ping statistics --- 00:20:58.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.756 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:58.756 11:48:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.756 11:48:31 -- nvmf/common.sh@421 -- # return 0 00:20:58.756 11:48:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:58.756 11:48:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.756 11:48:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:58.756 11:48:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:58.756 11:48:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.756 11:48:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:58.756 11:48:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:58.756 11:48:31 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:58.756 11:48:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:58.756 11:48:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.756 11:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:58.756 11:48:31 -- nvmf/common.sh@469 -- # nvmfpid=77316 00:20:58.756 11:48:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:58.756 11:48:31 -- nvmf/common.sh@470 -- # waitforlisten 77316 00:20:58.756 11:48:31 -- common/autotest_common.sh@829 -- # '[' -z 77316 ']' 00:20:58.756 11:48:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
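While nvmf_tgt starts up, a note on the network it will listen on: the nvmf_veth_init block traced above builds an isolated veth/bridge topology, with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace and the initiator side left on the host. A condensed sketch of that bring-up, assuming root privileges and the same interface, namespace and address names shown in the trace (the authoritative helper lives in /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh), is:

# namespace that will host the target process
ip netns add nvmf_tgt_ns_spdk
# veth pairs: one initiator-side pair plus two target-side pairs
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends of the pairs into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addresses used throughout the test: 10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring the links up on both sides of the namespace boundary
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together so initiator and target can talk
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# open the NVMe/TCP port and let the bridge forward traffic
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the single-packet pings in the trace just confirm this topology end to end
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1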
00:20:58.756 11:48:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.756 11:48:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.756 11:48:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.756 11:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:58.756 [2024-11-20 11:48:31.672388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:58.756 [2024-11-20 11:48:31.672797] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.015 [2024-11-20 11:48:31.812512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.015 [2024-11-20 11:48:31.951823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:59.015 [2024-11-20 11:48:31.952171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.015 [2024-11-20 11:48:31.952315] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.015 [2024-11-20 11:48:31.952360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.015 [2024-11-20 11:48:31.952488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.015 [2024-11-20 11:48:31.952708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:59.015 [2024-11-20 11:48:31.952855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:59.015 [2024-11-20 11:48:31.952920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.584 11:48:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.584 11:48:32 -- common/autotest_common.sh@862 -- # return 0 00:20:59.584 11:48:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:59.584 11:48:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.584 11:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 11:48:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.584 11:48:32 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.584 11:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.584 11:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:59.584 [2024-11-20 11:48:32.580327] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.584 11:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.584 11:48:32 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:59.584 11:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.584 11:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:59.844 Malloc0 00:20:59.844 11:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.844 11:48:32 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.844 11:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.844 11:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:59.844 11:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.844 11:48:32 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
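With that network in place, the prologue of bdevio.sh traced here provisions the target over JSON-RPC. A minimal sketch of the equivalent manual sequence, assuming the repository paths that appear elsewhere in this log and the default /var/tmp/spdk.sock RPC socket, would be:

# target app pinned to cores 3-6 (mask 0x78), run inside the test namespace;
# the script waits for /var/tmp/spdk.sock (waitforlisten) before issuing RPCs
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport with the same options used by the traced run
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MiB ramdisk with 512-byte blocks to act as the namespace
$rpc bdev_malloc_create 64 512 -b Malloc0
# subsystem cnode1: allow any host (-a), serial number via -s
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listener on the namespace-side address; added in the next step of the trace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420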
00:20:59.844 11:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.844 11:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:59.844 11:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.844 11:48:32 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.844 11:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.844 11:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:59.844 [2024-11-20 11:48:32.654365] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.844 11:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.844 11:48:32 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:59.844 11:48:32 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:59.844 11:48:32 -- nvmf/common.sh@520 -- # config=() 00:20:59.844 11:48:32 -- nvmf/common.sh@520 -- # local subsystem config 00:20:59.844 11:48:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:59.844 11:48:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:59.844 { 00:20:59.844 "params": { 00:20:59.844 "name": "Nvme$subsystem", 00:20:59.844 "trtype": "$TEST_TRANSPORT", 00:20:59.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.844 "adrfam": "ipv4", 00:20:59.844 "trsvcid": "$NVMF_PORT", 00:20:59.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.844 "hdgst": ${hdgst:-false}, 00:20:59.844 "ddgst": ${ddgst:-false} 00:20:59.844 }, 00:20:59.844 "method": "bdev_nvme_attach_controller" 00:20:59.844 } 00:20:59.844 EOF 00:20:59.844 )") 00:20:59.844 11:48:32 -- nvmf/common.sh@542 -- # cat 00:20:59.844 11:48:32 -- nvmf/common.sh@544 -- # jq . 00:20:59.844 11:48:32 -- nvmf/common.sh@545 -- # IFS=, 00:20:59.844 11:48:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:59.844 "params": { 00:20:59.844 "name": "Nvme1", 00:20:59.844 "trtype": "tcp", 00:20:59.844 "traddr": "10.0.0.2", 00:20:59.844 "adrfam": "ipv4", 00:20:59.844 "trsvcid": "4420", 00:20:59.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.844 "hdgst": false, 00:20:59.844 "ddgst": false 00:20:59.844 }, 00:20:59.844 "method": "bdev_nvme_attach_controller" 00:20:59.844 }' 00:20:59.844 [2024-11-20 11:48:32.715283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:59.844 [2024-11-20 11:48:32.715426] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77373 ] 00:20:59.844 [2024-11-20 11:48:32.852242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:00.104 [2024-11-20 11:48:32.935379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.105 [2024-11-20 11:48:32.935589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.105 [2024-11-20 11:48:32.935586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.105 [2024-11-20 11:48:33.088930] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
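The JSON fragment printed above (method bdev_nvme_attach_controller, controller Nvme1) is what gen_nvmf_target_json feeds to bdevio via --json /dev/fd/62: it tells the in-process bdev_nvme driver to attach to the subsystem just exported. Wrapped in SPDK's usual JSON-config shape, and written to an illustrative temporary file instead of a file descriptor, it would look roughly like this (the temporary file name is an assumption, and the helper may emit extra bookkeeping entries not visible in this trace):

# file name chosen only for illustration
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json

The attached controller surfaces as the Nvme1n1 block device listed under "I/O targets" below: 131072 blocks of 512 bytes, i.e. exactly the 64 MiB Malloc0 exported a step earlier.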
00:21:00.105 [2024-11-20 11:48:33.089070] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:00.105 I/O targets: 00:21:00.105 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:00.105 00:21:00.105 00:21:00.105 CUnit - A unit testing framework for C - Version 2.1-3 00:21:00.105 http://cunit.sourceforge.net/ 00:21:00.105 00:21:00.105 00:21:00.105 Suite: bdevio tests on: Nvme1n1 00:21:00.105 Test: blockdev write read block ...passed 00:21:00.364 Test: blockdev write zeroes read block ...passed 00:21:00.364 Test: blockdev write zeroes read no split ...passed 00:21:00.364 Test: blockdev write zeroes read split ...passed 00:21:00.364 Test: blockdev write zeroes read split partial ...passed 00:21:00.364 Test: blockdev reset ...[2024-11-20 11:48:33.212265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:00.364 [2024-11-20 11:48:33.212423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ab910 (9): Bad file descriptor 00:21:00.364 [2024-11-20 11:48:33.229756] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:00.364 passed 00:21:00.364 Test: blockdev write read 8 blocks ...passed 00:21:00.364 Test: blockdev write read size > 128k ...passed 00:21:00.364 Test: blockdev write read invalid size ...passed 00:21:00.364 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:00.364 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:00.364 Test: blockdev write read max offset ...passed 00:21:00.364 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:00.364 Test: blockdev writev readv 8 blocks ...passed 00:21:00.364 Test: blockdev writev readv 30 x 1block ...passed 00:21:00.364 Test: blockdev writev readv block ...passed 00:21:00.364 Test: blockdev writev readv size > 128k ...passed 00:21:00.364 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:00.625 Test: blockdev comparev and writev ...[2024-11-20 11:48:33.406141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.406265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.406283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.406312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.406622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.406631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.406641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.406647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.406960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.406969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.406979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.406986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.407302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.407317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.407327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:00.625 [2024-11-20 11:48:33.407333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:00.625 passed 00:21:00.625 Test: blockdev nvme passthru rw ...passed 00:21:00.625 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:48:33.491032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.625 [2024-11-20 11:48:33.491068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.491172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOpassed 00:21:00.625 Test: blockdev nvme admin passthru ...CK OFFSET 0x0 len:0x0 00:21:00.625 [2024-11-20 11:48:33.491267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.491423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.625 [2024-11-20 11:48:33.491432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:00.625 [2024-11-20 11:48:33.491528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.625 [2024-11-20 11:48:33.491537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:00.625 passed 00:21:00.625 Test: blockdev copy ...passed 00:21:00.625 00:21:00.625 Run Summary: Type Total Ran Passed Failed Inactive 00:21:00.625 suites 1 1 n/a 0 0 00:21:00.625 tests 23 23 23 0 0 00:21:00.625 asserts 152 152 152 0 n/a 00:21:00.625 00:21:00.625 Elapsed time = 0.919 seconds 00:21:00.885 11:48:33 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.885 11:48:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.885 11:48:33 -- common/autotest_common.sh@10 -- # set +x 00:21:00.885 11:48:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.885 11:48:33 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:00.885 11:48:33 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:00.885 11:48:33 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:21:00.885 11:48:33 -- nvmf/common.sh@116 -- # sync 00:21:00.885 11:48:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:00.885 11:48:33 -- nvmf/common.sh@119 -- # set +e 00:21:00.885 11:48:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:00.885 11:48:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:00.885 rmmod nvme_tcp 00:21:00.885 rmmod nvme_fabrics 00:21:00.885 rmmod nvme_keyring 00:21:00.885 11:48:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:00.885 11:48:33 -- nvmf/common.sh@123 -- # set -e 00:21:00.885 11:48:33 -- nvmf/common.sh@124 -- # return 0 00:21:00.885 11:48:33 -- nvmf/common.sh@477 -- # '[' -n 77316 ']' 00:21:00.885 11:48:33 -- nvmf/common.sh@478 -- # killprocess 77316 00:21:00.885 11:48:33 -- common/autotest_common.sh@936 -- # '[' -z 77316 ']' 00:21:00.885 11:48:33 -- common/autotest_common.sh@940 -- # kill -0 77316 00:21:00.885 11:48:33 -- common/autotest_common.sh@941 -- # uname 00:21:00.885 11:48:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.885 11:48:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77316 00:21:00.885 killing process with pid 77316 00:21:00.885 11:48:33 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:00.885 11:48:33 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:00.885 11:48:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77316' 00:21:00.885 11:48:33 -- common/autotest_common.sh@955 -- # kill 77316 00:21:00.885 11:48:33 -- common/autotest_common.sh@960 -- # wait 77316 00:21:01.457 11:48:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:01.457 11:48:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:01.457 11:48:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:01.457 11:48:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.457 11:48:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:01.457 11:48:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.457 11:48:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.457 11:48:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.457 11:48:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:01.457 00:21:01.457 real 0m3.359s 00:21:01.457 user 0m11.111s 00:21:01.457 sys 0m0.903s 00:21:01.457 11:48:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:01.457 ************************************ 00:21:01.457 END TEST nvmf_bdevio 00:21:01.457 ************************************ 00:21:01.457 11:48:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.457 11:48:34 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:01.457 11:48:34 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:01.457 11:48:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:01.457 11:48:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:01.457 11:48:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.457 ************************************ 00:21:01.457 START TEST nvmf_bdevio_no_huge 00:21:01.457 ************************************ 00:21:01.457 11:48:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:01.717 * Looking for test storage... 
00:21:01.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:01.717 11:48:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:01.717 11:48:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:01.717 11:48:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:01.717 11:48:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:01.717 11:48:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:01.717 11:48:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:01.717 11:48:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:01.717 11:48:34 -- scripts/common.sh@335 -- # IFS=.-: 00:21:01.717 11:48:34 -- scripts/common.sh@335 -- # read -ra ver1 00:21:01.717 11:48:34 -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.717 11:48:34 -- scripts/common.sh@336 -- # read -ra ver2 00:21:01.717 11:48:34 -- scripts/common.sh@337 -- # local 'op=<' 00:21:01.717 11:48:34 -- scripts/common.sh@339 -- # ver1_l=2 00:21:01.717 11:48:34 -- scripts/common.sh@340 -- # ver2_l=1 00:21:01.717 11:48:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:01.717 11:48:34 -- scripts/common.sh@343 -- # case "$op" in 00:21:01.717 11:48:34 -- scripts/common.sh@344 -- # : 1 00:21:01.717 11:48:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:01.717 11:48:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:01.717 11:48:34 -- scripts/common.sh@364 -- # decimal 1 00:21:01.717 11:48:34 -- scripts/common.sh@352 -- # local d=1 00:21:01.717 11:48:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.717 11:48:34 -- scripts/common.sh@354 -- # echo 1 00:21:01.717 11:48:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:01.717 11:48:34 -- scripts/common.sh@365 -- # decimal 2 00:21:01.717 11:48:34 -- scripts/common.sh@352 -- # local d=2 00:21:01.717 11:48:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.717 11:48:34 -- scripts/common.sh@354 -- # echo 2 00:21:01.717 11:48:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:01.717 11:48:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.717 11:48:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:01.717 11:48:34 -- scripts/common.sh@367 -- # return 0 00:21:01.717 11:48:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.717 11:48:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:01.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.717 --rc genhtml_branch_coverage=1 00:21:01.717 --rc genhtml_function_coverage=1 00:21:01.717 --rc genhtml_legend=1 00:21:01.717 --rc geninfo_all_blocks=1 00:21:01.717 --rc geninfo_unexecuted_blocks=1 00:21:01.717 00:21:01.717 ' 00:21:01.717 11:48:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:01.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.717 --rc genhtml_branch_coverage=1 00:21:01.717 --rc genhtml_function_coverage=1 00:21:01.717 --rc genhtml_legend=1 00:21:01.717 --rc geninfo_all_blocks=1 00:21:01.717 --rc geninfo_unexecuted_blocks=1 00:21:01.717 00:21:01.717 ' 00:21:01.717 11:48:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:01.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.717 --rc genhtml_branch_coverage=1 00:21:01.717 --rc genhtml_function_coverage=1 00:21:01.717 --rc genhtml_legend=1 00:21:01.717 --rc geninfo_all_blocks=1 00:21:01.717 --rc geninfo_unexecuted_blocks=1 00:21:01.717 00:21:01.717 ' 00:21:01.717 
11:48:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:01.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.717 --rc genhtml_branch_coverage=1 00:21:01.717 --rc genhtml_function_coverage=1 00:21:01.717 --rc genhtml_legend=1 00:21:01.717 --rc geninfo_all_blocks=1 00:21:01.717 --rc geninfo_unexecuted_blocks=1 00:21:01.717 00:21:01.717 ' 00:21:01.717 11:48:34 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:01.717 11:48:34 -- nvmf/common.sh@7 -- # uname -s 00:21:01.717 11:48:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.717 11:48:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.717 11:48:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.717 11:48:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.717 11:48:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.717 11:48:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.718 11:48:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.718 11:48:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.718 11:48:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.718 11:48:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.718 11:48:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:21:01.718 11:48:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:21:01.718 11:48:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.718 11:48:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.718 11:48:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:01.718 11:48:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:01.718 11:48:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.718 11:48:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.718 11:48:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.718 11:48:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:48:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:48:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:48:34 -- paths/export.sh@5 -- # export PATH 00:21:01.718 11:48:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:48:34 -- nvmf/common.sh@46 -- # : 0 00:21:01.718 11:48:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:01.718 11:48:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:01.718 11:48:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:01.718 11:48:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.718 11:48:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.718 11:48:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:01.718 11:48:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:01.718 11:48:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:01.718 11:48:34 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:01.718 11:48:34 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:01.718 11:48:34 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:01.718 11:48:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:01.718 11:48:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.718 11:48:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:01.718 11:48:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:01.718 11:48:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:01.718 11:48:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.718 11:48:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.718 11:48:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.718 11:48:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:01.718 11:48:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:01.718 11:48:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:01.718 11:48:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:01.718 11:48:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:01.718 11:48:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:01.718 11:48:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.718 11:48:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.718 11:48:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:01.718 11:48:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:01.718 11:48:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:01.718 11:48:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:01.718 11:48:34 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:01.718 11:48:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.718 11:48:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:01.718 11:48:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:01.718 11:48:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:01.718 11:48:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:01.718 11:48:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:01.718 11:48:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:01.718 Cannot find device "nvmf_tgt_br" 00:21:01.718 11:48:34 -- nvmf/common.sh@154 -- # true 00:21:01.718 11:48:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.718 Cannot find device "nvmf_tgt_br2" 00:21:01.718 11:48:34 -- nvmf/common.sh@155 -- # true 00:21:01.718 11:48:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:01.718 11:48:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:01.978 Cannot find device "nvmf_tgt_br" 00:21:01.979 11:48:34 -- nvmf/common.sh@157 -- # true 00:21:01.979 11:48:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:01.979 Cannot find device "nvmf_tgt_br2" 00:21:01.979 11:48:34 -- nvmf/common.sh@158 -- # true 00:21:01.979 11:48:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:01.979 11:48:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:01.979 11:48:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:01.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.979 11:48:34 -- nvmf/common.sh@161 -- # true 00:21:01.979 11:48:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:01.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.979 11:48:34 -- nvmf/common.sh@162 -- # true 00:21:01.979 11:48:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:01.979 11:48:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:01.979 11:48:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:01.979 11:48:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:01.979 11:48:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:01.979 11:48:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:01.979 11:48:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:01.979 11:48:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:01.979 11:48:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:01.979 11:48:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:01.979 11:48:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:01.979 11:48:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:01.979 11:48:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:01.979 11:48:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:01.979 11:48:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:01.979 11:48:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:01.979 11:48:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:01.979 11:48:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:01.979 11:48:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:01.979 11:48:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.239 11:48:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.239 11:48:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.239 11:48:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.239 11:48:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:02.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:21:02.239 00:21:02.239 --- 10.0.0.2 ping statistics --- 00:21:02.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.239 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:02.239 11:48:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:02.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:02.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:21:02.239 00:21:02.239 --- 10.0.0.3 ping statistics --- 00:21:02.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.239 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:02.239 11:48:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:02.239 00:21:02.239 --- 10.0.0.1 ping statistics --- 00:21:02.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.239 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:02.239 11:48:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.239 11:48:35 -- nvmf/common.sh@421 -- # return 0 00:21:02.239 11:48:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:02.239 11:48:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.239 11:48:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:02.239 11:48:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:02.239 11:48:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.239 11:48:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:02.239 11:48:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:02.239 11:48:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:02.239 11:48:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:02.239 11:48:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.239 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:21:02.239 11:48:35 -- nvmf/common.sh@469 -- # nvmfpid=77566 00:21:02.239 11:48:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:02.239 11:48:35 -- nvmf/common.sh@470 -- # waitforlisten 77566 00:21:02.239 11:48:35 -- common/autotest_common.sh@829 -- # '[' -z 77566 ']' 00:21:02.239 11:48:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.239 11:48:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.239 11:48:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
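For reference, the veth topology that nvmf_veth_init assembles in the trace above reduces to roughly the following standalone sequence; this is a condensed sketch, with interface names and the 10.0.0.x addresses taken from this run and teardown of any previous run omitted:

  # target-side interfaces live inside a dedicated network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator address on the host, two target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a bridge ties the host ends of all three veth pairs together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 in the trace above are simply sanity checks that this topology forwards traffic before the target is started.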
00:21:02.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.239 11:48:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.239 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:21:02.239 [2024-11-20 11:48:35.150047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:02.239 [2024-11-20 11:48:35.150131] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:02.500 [2024-11-20 11:48:35.281156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.500 [2024-11-20 11:48:35.380931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:02.500 [2024-11-20 11:48:35.381073] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.500 [2024-11-20 11:48:35.381081] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.500 [2024-11-20 11:48:35.381087] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.500 [2024-11-20 11:48:35.381180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:02.500 [2024-11-20 11:48:35.381366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:02.500 [2024-11-20 11:48:35.381560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.500 [2024-11-20 11:48:35.381562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:03.069 11:48:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.069 11:48:35 -- common/autotest_common.sh@862 -- # return 0 00:21:03.069 11:48:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:03.069 11:48:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.069 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 11:48:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.069 11:48:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.069 11:48:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.069 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 [2024-11-20 11:48:36.054450] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.069 11:48:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.069 11:48:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:03.069 11:48:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.069 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 Malloc0 00:21:03.069 11:48:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.069 11:48:36 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.069 11:48:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.069 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 11:48:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.069 11:48:36 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.069 11:48:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.069 11:48:36 -- common/autotest_common.sh@10 -- # set +x 
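Stripped of the rpc_cmd/xtrace plumbing, the target-side setup that bdevio.sh is tracing here, together with the listener call that follows just below, amounts to roughly the following sketch (the $rpc shorthand is only for brevity; rpc.py reaches the namespaced target over the default /var/tmp/spdk.sock socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself then connects back to that subsystem as an NVMe-oF host via the bdev_nvme_attach_controller JSON generated a few lines further on.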
00:21:03.069 11:48:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.069 11:48:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.069 11:48:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.069 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 [2024-11-20 11:48:36.107198] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.329 11:48:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.329 11:48:36 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:03.329 11:48:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:03.329 11:48:36 -- nvmf/common.sh@520 -- # config=() 00:21:03.329 11:48:36 -- nvmf/common.sh@520 -- # local subsystem config 00:21:03.329 11:48:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:03.329 11:48:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:03.329 { 00:21:03.329 "params": { 00:21:03.329 "name": "Nvme$subsystem", 00:21:03.329 "trtype": "$TEST_TRANSPORT", 00:21:03.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.329 "adrfam": "ipv4", 00:21:03.329 "trsvcid": "$NVMF_PORT", 00:21:03.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.329 "hdgst": ${hdgst:-false}, 00:21:03.329 "ddgst": ${ddgst:-false} 00:21:03.329 }, 00:21:03.329 "method": "bdev_nvme_attach_controller" 00:21:03.329 } 00:21:03.329 EOF 00:21:03.329 )") 00:21:03.329 11:48:36 -- nvmf/common.sh@542 -- # cat 00:21:03.329 11:48:36 -- nvmf/common.sh@544 -- # jq . 00:21:03.329 11:48:36 -- nvmf/common.sh@545 -- # IFS=, 00:21:03.329 11:48:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:03.329 "params": { 00:21:03.329 "name": "Nvme1", 00:21:03.329 "trtype": "tcp", 00:21:03.329 "traddr": "10.0.0.2", 00:21:03.329 "adrfam": "ipv4", 00:21:03.329 "trsvcid": "4420", 00:21:03.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.329 "hdgst": false, 00:21:03.329 "ddgst": false 00:21:03.329 }, 00:21:03.329 "method": "bdev_nvme_attach_controller" 00:21:03.329 }' 00:21:03.329 [2024-11-20 11:48:36.164578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:03.329 [2024-11-20 11:48:36.165049] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77620 ] 00:21:03.329 [2024-11-20 11:48:36.298015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:03.589 [2024-11-20 11:48:36.402964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.589 [2024-11-20 11:48:36.403164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.589 [2024-11-20 11:48:36.403167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.589 [2024-11-20 11:48:36.555606] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:21:03.589 [2024-11-20 11:48:36.555645] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:03.589 I/O targets: 00:21:03.589 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:03.589 00:21:03.589 00:21:03.589 CUnit - A unit testing framework for C - Version 2.1-3 00:21:03.589 http://cunit.sourceforge.net/ 00:21:03.589 00:21:03.589 00:21:03.589 Suite: bdevio tests on: Nvme1n1 00:21:03.589 Test: blockdev write read block ...passed 00:21:03.849 Test: blockdev write zeroes read block ...passed 00:21:03.849 Test: blockdev write zeroes read no split ...passed 00:21:03.849 Test: blockdev write zeroes read split ...passed 00:21:03.849 Test: blockdev write zeroes read split partial ...passed 00:21:03.849 Test: blockdev reset ...[2024-11-20 11:48:36.698141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.849 [2024-11-20 11:48:36.698241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e81c0 (9): Bad file descriptor 00:21:03.849 [2024-11-20 11:48:36.714272] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:03.849 passed 00:21:03.849 Test: blockdev write read 8 blocks ...passed 00:21:03.849 Test: blockdev write read size > 128k ...passed 00:21:03.849 Test: blockdev write read invalid size ...passed 00:21:03.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:03.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:03.849 Test: blockdev write read max offset ...passed 00:21:03.849 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:03.849 Test: blockdev writev readv 8 blocks ...passed 00:21:03.849 Test: blockdev writev readv 30 x 1block ...passed 00:21:03.849 Test: blockdev writev readv block ...passed 00:21:03.849 Test: blockdev writev readv size > 128k ...passed 00:21:03.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:03.849 Test: blockdev comparev and writev ...[2024-11-20 11:48:36.887776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.888056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.849 [2024-11-20 11:48:36.888132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.888173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:03.849 [2024-11-20 11:48:36.888563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.888639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:03.849 [2024-11-20 11:48:36.888718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.888765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:03.849 [2024-11-20 11:48:36.889125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.889186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:03.849 [2024-11-20 11:48:36.889240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.889330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:03.849 [2024-11-20 11:48:36.889677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.889744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:03.849 [2024-11-20 11:48:36.889786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.849 [2024-11-20 11:48:36.889833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:04.109 passed 00:21:04.109 Test: blockdev nvme passthru rw ...passed 00:21:04.109 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:48:36.974039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.109 [2024-11-20 11:48:36.974334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:04.109 [2024-11-20 11:48:36.974528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.109 [2024-11-20 11:48:36.974589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:04.109 [2024-11-20 11:48:36.974760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.109 [2024-11-20 11:48:36.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:04.110 [2024-11-20 11:48:36.974960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.110 [2024-11-20 11:48:36.975011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:04.110 passed 00:21:04.110 Test: blockdev nvme admin passthru ...passed 00:21:04.110 Test: blockdev copy ...passed 00:21:04.110 00:21:04.110 Run Summary: Type Total Ran Passed Failed Inactive 00:21:04.110 suites 1 1 n/a 0 0 00:21:04.110 tests 23 23 23 0 0 00:21:04.110 asserts 152 152 152 0 n/a 00:21:04.110 00:21:04.110 Elapsed time = 0.961 seconds 00:21:04.368 11:48:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.368 11:48:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.368 11:48:37 -- common/autotest_common.sh@10 -- # set +x 00:21:04.368 11:48:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.368 11:48:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:04.368 11:48:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:04.368 11:48:37 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:21:04.368 11:48:37 -- nvmf/common.sh@116 -- # sync 00:21:04.627 11:48:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:04.627 11:48:37 -- nvmf/common.sh@119 -- # set +e 00:21:04.627 11:48:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:04.627 11:48:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:04.627 rmmod nvme_tcp 00:21:04.627 rmmod nvme_fabrics 00:21:04.627 rmmod nvme_keyring 00:21:04.627 11:48:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:04.627 11:48:37 -- nvmf/common.sh@123 -- # set -e 00:21:04.627 11:48:37 -- nvmf/common.sh@124 -- # return 0 00:21:04.627 11:48:37 -- nvmf/common.sh@477 -- # '[' -n 77566 ']' 00:21:04.627 11:48:37 -- nvmf/common.sh@478 -- # killprocess 77566 00:21:04.627 11:48:37 -- common/autotest_common.sh@936 -- # '[' -z 77566 ']' 00:21:04.627 11:48:37 -- common/autotest_common.sh@940 -- # kill -0 77566 00:21:04.627 11:48:37 -- common/autotest_common.sh@941 -- # uname 00:21:04.627 11:48:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.627 11:48:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77566 00:21:04.627 11:48:37 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:04.627 killing process with pid 77566 00:21:04.627 11:48:37 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:04.627 11:48:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77566' 00:21:04.627 11:48:37 -- common/autotest_common.sh@955 -- # kill 77566 00:21:04.627 11:48:37 -- common/autotest_common.sh@960 -- # wait 77566 00:21:04.887 11:48:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:04.887 11:48:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:04.887 11:48:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:04.887 11:48:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.887 11:48:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:04.887 11:48:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.887 11:48:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.887 11:48:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.147 11:48:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:05.147 00:21:05.147 real 0m3.559s 00:21:05.147 user 0m11.799s 00:21:05.147 sys 0m1.350s 00:21:05.147 11:48:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:05.147 ************************************ 00:21:05.147 END TEST nvmf_bdevio_no_huge 00:21:05.147 11:48:37 -- common/autotest_common.sh@10 -- # set +x 00:21:05.147 ************************************ 00:21:05.147 11:48:38 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:05.147 11:48:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.147 11:48:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.147 11:48:38 -- common/autotest_common.sh@10 -- # set +x 00:21:05.147 ************************************ 00:21:05.147 START TEST nvmf_tls 00:21:05.147 ************************************ 00:21:05.147 11:48:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:05.147 * Looking for test storage... 
00:21:05.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:05.147 11:48:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:05.147 11:48:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:05.147 11:48:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:05.407 11:48:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:05.407 11:48:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:05.407 11:48:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:05.407 11:48:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:05.407 11:48:38 -- scripts/common.sh@335 -- # IFS=.-: 00:21:05.407 11:48:38 -- scripts/common.sh@335 -- # read -ra ver1 00:21:05.407 11:48:38 -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.407 11:48:38 -- scripts/common.sh@336 -- # read -ra ver2 00:21:05.407 11:48:38 -- scripts/common.sh@337 -- # local 'op=<' 00:21:05.407 11:48:38 -- scripts/common.sh@339 -- # ver1_l=2 00:21:05.407 11:48:38 -- scripts/common.sh@340 -- # ver2_l=1 00:21:05.407 11:48:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:05.407 11:48:38 -- scripts/common.sh@343 -- # case "$op" in 00:21:05.407 11:48:38 -- scripts/common.sh@344 -- # : 1 00:21:05.407 11:48:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:05.407 11:48:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:05.407 11:48:38 -- scripts/common.sh@364 -- # decimal 1 00:21:05.407 11:48:38 -- scripts/common.sh@352 -- # local d=1 00:21:05.407 11:48:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.407 11:48:38 -- scripts/common.sh@354 -- # echo 1 00:21:05.407 11:48:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:05.407 11:48:38 -- scripts/common.sh@365 -- # decimal 2 00:21:05.407 11:48:38 -- scripts/common.sh@352 -- # local d=2 00:21:05.407 11:48:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.407 11:48:38 -- scripts/common.sh@354 -- # echo 2 00:21:05.408 11:48:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:05.408 11:48:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:05.408 11:48:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:05.408 11:48:38 -- scripts/common.sh@367 -- # return 0 00:21:05.408 11:48:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.408 11:48:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.408 --rc genhtml_branch_coverage=1 00:21:05.408 --rc genhtml_function_coverage=1 00:21:05.408 --rc genhtml_legend=1 00:21:05.408 --rc geninfo_all_blocks=1 00:21:05.408 --rc geninfo_unexecuted_blocks=1 00:21:05.408 00:21:05.408 ' 00:21:05.408 11:48:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.408 --rc genhtml_branch_coverage=1 00:21:05.408 --rc genhtml_function_coverage=1 00:21:05.408 --rc genhtml_legend=1 00:21:05.408 --rc geninfo_all_blocks=1 00:21:05.408 --rc geninfo_unexecuted_blocks=1 00:21:05.408 00:21:05.408 ' 00:21:05.408 11:48:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.408 --rc genhtml_branch_coverage=1 00:21:05.408 --rc genhtml_function_coverage=1 00:21:05.408 --rc genhtml_legend=1 00:21:05.408 --rc geninfo_all_blocks=1 00:21:05.408 --rc geninfo_unexecuted_blocks=1 00:21:05.408 00:21:05.408 ' 00:21:05.408 
11:48:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.408 --rc genhtml_branch_coverage=1 00:21:05.408 --rc genhtml_function_coverage=1 00:21:05.408 --rc genhtml_legend=1 00:21:05.408 --rc geninfo_all_blocks=1 00:21:05.408 --rc geninfo_unexecuted_blocks=1 00:21:05.408 00:21:05.408 ' 00:21:05.408 11:48:38 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:05.408 11:48:38 -- nvmf/common.sh@7 -- # uname -s 00:21:05.408 11:48:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.408 11:48:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.408 11:48:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.408 11:48:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.408 11:48:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.408 11:48:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.408 11:48:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.408 11:48:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.408 11:48:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.408 11:48:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.408 11:48:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:21:05.408 11:48:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:21:05.408 11:48:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.408 11:48:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.408 11:48:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:05.408 11:48:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:05.408 11:48:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.408 11:48:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.408 11:48:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.408 11:48:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.408 11:48:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.408 11:48:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.408 11:48:38 -- paths/export.sh@5 -- # export PATH 00:21:05.408 11:48:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.408 11:48:38 -- nvmf/common.sh@46 -- # : 0 00:21:05.408 11:48:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:05.408 11:48:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:05.408 11:48:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:05.408 11:48:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.408 11:48:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.408 11:48:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:05.408 11:48:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:05.408 11:48:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:05.408 11:48:38 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:05.408 11:48:38 -- target/tls.sh@71 -- # nvmftestinit 00:21:05.408 11:48:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:05.408 11:48:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.408 11:48:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:05.408 11:48:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:05.408 11:48:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:05.408 11:48:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.408 11:48:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.408 11:48:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.408 11:48:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:05.408 11:48:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:05.408 11:48:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:05.408 11:48:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:05.408 11:48:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:05.408 11:48:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:05.408 11:48:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.408 11:48:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.408 11:48:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:05.408 11:48:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:05.408 11:48:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:05.408 11:48:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:05.408 11:48:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:05.408 
11:48:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.408 11:48:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:05.408 11:48:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:05.408 11:48:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:05.408 11:48:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:05.408 11:48:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:05.408 11:48:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:05.408 Cannot find device "nvmf_tgt_br" 00:21:05.408 11:48:38 -- nvmf/common.sh@154 -- # true 00:21:05.408 11:48:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:05.408 Cannot find device "nvmf_tgt_br2" 00:21:05.408 11:48:38 -- nvmf/common.sh@155 -- # true 00:21:05.408 11:48:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:05.408 11:48:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:05.408 Cannot find device "nvmf_tgt_br" 00:21:05.408 11:48:38 -- nvmf/common.sh@157 -- # true 00:21:05.408 11:48:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:05.408 Cannot find device "nvmf_tgt_br2" 00:21:05.408 11:48:38 -- nvmf/common.sh@158 -- # true 00:21:05.408 11:48:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:05.408 11:48:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:05.669 11:48:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:05.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.669 11:48:38 -- nvmf/common.sh@161 -- # true 00:21:05.669 11:48:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:05.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.669 11:48:38 -- nvmf/common.sh@162 -- # true 00:21:05.669 11:48:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:05.669 11:48:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:05.669 11:48:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:05.669 11:48:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:05.669 11:48:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:05.669 11:48:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:05.669 11:48:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:05.669 11:48:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:05.669 11:48:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:05.669 11:48:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:05.669 11:48:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:05.669 11:48:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:05.669 11:48:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:05.669 11:48:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:05.669 11:48:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:05.669 11:48:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:05.669 11:48:38 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:05.669 11:48:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:05.669 11:48:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:05.669 11:48:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:05.669 11:48:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:05.669 11:48:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:05.669 11:48:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:05.669 11:48:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:05.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:21:05.669 00:21:05.669 --- 10.0.0.2 ping statistics --- 00:21:05.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.669 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:21:05.669 11:48:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:05.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:05.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:21:05.669 00:21:05.669 --- 10.0.0.3 ping statistics --- 00:21:05.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.669 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:05.669 11:48:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:05.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:05.669 00:21:05.669 --- 10.0.0.1 ping statistics --- 00:21:05.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.669 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:05.669 11:48:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.669 11:48:38 -- nvmf/common.sh@421 -- # return 0 00:21:05.669 11:48:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:05.669 11:48:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.669 11:48:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:05.669 11:48:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:05.669 11:48:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.669 11:48:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:05.669 11:48:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:05.669 11:48:38 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:05.669 11:48:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:05.669 11:48:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.669 11:48:38 -- common/autotest_common.sh@10 -- # set +x 00:21:05.669 11:48:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:05.669 11:48:38 -- nvmf/common.sh@469 -- # nvmfpid=77809 00:21:05.669 11:48:38 -- nvmf/common.sh@470 -- # waitforlisten 77809 00:21:05.669 11:48:38 -- common/autotest_common.sh@829 -- # '[' -z 77809 ']' 00:21:05.669 11:48:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.669 11:48:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:05.670 11:48:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.670 11:48:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.670 11:48:38 -- common/autotest_common.sh@10 -- # set +x 00:21:05.938 [2024-11-20 11:48:38.761124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:05.938 [2024-11-20 11:48:38.761201] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.938 [2024-11-20 11:48:38.899564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.206 [2024-11-20 11:48:38.979971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:06.206 [2024-11-20 11:48:38.980292] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.206 [2024-11-20 11:48:38.980373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.206 [2024-11-20 11:48:38.980423] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.206 [2024-11-20 11:48:38.980488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.776 11:48:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.776 11:48:39 -- common/autotest_common.sh@862 -- # return 0 00:21:06.776 11:48:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:06.776 11:48:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.776 11:48:39 -- common/autotest_common.sh@10 -- # set +x 00:21:06.776 11:48:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.776 11:48:39 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:21:06.776 11:48:39 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:07.036 true 00:21:07.036 11:48:39 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:07.036 11:48:39 -- target/tls.sh@82 -- # jq -r .tls_version 00:21:07.295 11:48:40 -- target/tls.sh@82 -- # version=0 00:21:07.295 11:48:40 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:21:07.296 11:48:40 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:07.296 11:48:40 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:07.296 11:48:40 -- target/tls.sh@90 -- # jq -r .tls_version 00:21:07.555 11:48:40 -- target/tls.sh@90 -- # version=13 00:21:07.555 11:48:40 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:21:07.555 11:48:40 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:07.814 11:48:40 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:07.814 11:48:40 -- target/tls.sh@98 -- # jq -r .tls_version 00:21:08.073 11:48:40 -- target/tls.sh@98 -- # version=7 00:21:08.073 11:48:40 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:21:08.073 11:48:40 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:08.073 11:48:40 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:08.073 11:48:41 -- 
target/tls.sh@105 -- # ktls=false 00:21:08.073 11:48:41 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:21:08.073 11:48:41 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:08.334 11:48:41 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:08.334 11:48:41 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:08.593 11:48:41 -- target/tls.sh@113 -- # ktls=true 00:21:08.593 11:48:41 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:21:08.593 11:48:41 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:08.853 11:48:41 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:08.853 11:48:41 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:21:09.113 11:48:41 -- target/tls.sh@121 -- # ktls=false 00:21:09.113 11:48:41 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:21:09.113 11:48:41 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:21:09.113 11:48:41 -- target/tls.sh@49 -- # local key hash crc 00:21:09.113 11:48:41 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:21:09.113 11:48:41 -- target/tls.sh@51 -- # hash=01 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # tail -c8 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # gzip -1 -c 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # head -c 4 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # crc='p$H�' 00:21:09.113 11:48:41 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:21:09.113 11:48:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:09.113 11:48:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:09.113 11:48:41 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:09.113 11:48:41 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:21:09.113 11:48:41 -- target/tls.sh@49 -- # local key hash crc 00:21:09.113 11:48:41 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:21:09.113 11:48:41 -- target/tls.sh@51 -- # hash=01 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # gzip -1 -c 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # tail -c8 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # head -c 4 00:21:09.113 11:48:41 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:21:09.113 11:48:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:09.113 11:48:41 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:21:09.113 11:48:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:09.113 11:48:41 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:09.113 11:48:41 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:09.113 11:48:41 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:21:09.113 11:48:41 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:09.113 11:48:41 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
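The format_interchange_psk helper traced above (target/tls.sh@49-54) builds the NVMe-oF TLS PSK interchange string: a CRC32 of the configured key, read out of the gzip trailer, is appended to the key text, base64-encoded, and wrapped in a NVMeTLSkey-1:<hash>: prefix. A minimal re-derivation with the same pipeline follows; it keeps the CRC in a shell variable for brevity (safe here because these CRC bytes contain no NUL), whereas tls.sh feeds the concatenation to base64 through a file descriptor.

    key=00112233445566778899aabbccddeeff     # raw hex key from the log
    hash=01                                  # hash identifier passed to format_interchange_psk
    # gzip -1 stores the CRC32 of its input in the trailer: last 8 bytes are CRC32 + ISIZE
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as in the log

The second key (ffeeddcc...) and, later in the run, the 48-character key formatted with hash 02 go through exactly the same steps.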
00:21:09.113 11:48:41 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:09.113 11:48:41 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:21:09.113 11:48:41 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:09.373 11:48:42 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:09.632 11:48:42 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:09.632 11:48:42 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:09.633 11:48:42 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.633 [2024-11-20 11:48:42.631169] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.633 11:48:42 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:09.892 11:48:42 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.152 [2024-11-20 11:48:43.042462] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.152 [2024-11-20 11:48:43.042860] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.152 11:48:43 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:10.412 malloc0 00:21:10.412 11:48:43 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:10.671 11:48:43 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:10.671 11:48:43 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:22.905 Initializing NVMe Controllers 00:21:22.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:22.905 Initialization complete. Launching workers. 
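The target side of the test is configured entirely over JSON-RPC by setup_nvmf_tgt (target/tls.sh@58-67), whose calls appear verbatim above. Collected in order, with the paths from the log, the sequence is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt   # interchange-format PSK, mode 0600

    $RPC sock_impl_set_options -i ssl --tls-version 13           # tls.sh@139
    $RPC framework_start_init                                    # tls.sh@140, target was started with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The -k on the listener is what produces the 'TLS support is considered experimental' notice and makes port 4420 negotiate TLS, and --psk on nvmf_subsystem_add_host ties key1.txt to host1. The first load generator, spdk_nvme_perf run with -S ssl and --psk-path inside the namespace, is then pointed at that listener and produces the latency summary that follows.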
00:21:22.905 ======================================================== 00:21:22.905 Latency(us) 00:21:22.905 Device Information : IOPS MiB/s Average min max 00:21:22.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15319.96 59.84 4178.03 981.77 17446.15 00:21:22.905 ======================================================== 00:21:22.905 Total : 15319.96 59.84 4178.03 981.77 17446.15 00:21:22.905 00:21:22.905 11:48:53 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:22.905 11:48:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:22.905 11:48:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:22.905 11:48:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:22.905 11:48:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:21:22.905 11:48:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.905 11:48:53 -- target/tls.sh@28 -- # bdevperf_pid=78172 00:21:22.905 11:48:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.905 11:48:53 -- target/tls.sh@31 -- # waitforlisten 78172 /var/tmp/bdevperf.sock 00:21:22.905 11:48:53 -- common/autotest_common.sh@829 -- # '[' -z 78172 ']' 00:21:22.905 11:48:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.905 11:48:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.905 11:48:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.905 11:48:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.905 11:48:53 -- common/autotest_common.sh@10 -- # set +x 00:21:22.905 11:48:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.905 [2024-11-20 11:48:53.880508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:22.905 [2024-11-20 11:48:53.880580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78172 ] 00:21:22.905 [2024-11-20 11:48:54.005515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.905 [2024-11-20 11:48:54.139669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.905 11:48:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.905 11:48:54 -- common/autotest_common.sh@862 -- # return 0 00:21:22.905 11:48:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:22.905 [2024-11-20 11:48:54.891049] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.905 TLSTESTn1 00:21:22.905 11:48:54 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:22.905 Running I/O for 10 seconds... 
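run_bdevperf (target/tls.sh@22-41) exercises the host side: bdevperf is started idle with -z on its own RPC socket, a TLS controller is attached with bdev_nvme_attach_controller and the host copy of the PSK, and bdevperf.py perform_tests launches the 10-second verify workload whose table follows. A condensed sketch of the same flow, with the waitforlisten polling replaced by plain backgrounding:

    BPERF_SOCK=/var/tmp/bdevperf.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start bdevperf idle (-z) so the target bdev can be attached over RPC first
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &

    # Attach an NVMe/TCP controller, presenting the client copy of key1.txt
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

    # Kick off the run against TLSTESTn1 and wait for the summary
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$BPERF_SOCK" perform_tests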
00:21:32.892 00:21:32.892 Latency(us) 00:21:32.892 [2024-11-20T11:49:05.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.892 [2024-11-20T11:49:05.935Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:32.892 Verification LBA range: start 0x0 length 0x2000 00:21:32.892 TLSTESTn1 : 10.01 8943.88 34.94 0.00 0.00 14295.02 2160.68 15224.96 00:21:32.892 [2024-11-20T11:49:05.935Z] =================================================================================================================== 00:21:32.892 [2024-11-20T11:49:05.935Z] Total : 8943.88 34.94 0.00 0.00 14295.02 2160.68 15224.96 00:21:32.892 0 00:21:32.892 11:49:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.892 11:49:05 -- target/tls.sh@45 -- # killprocess 78172 00:21:32.892 11:49:05 -- common/autotest_common.sh@936 -- # '[' -z 78172 ']' 00:21:32.892 11:49:05 -- common/autotest_common.sh@940 -- # kill -0 78172 00:21:32.892 11:49:05 -- common/autotest_common.sh@941 -- # uname 00:21:32.892 11:49:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:32.892 11:49:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78172 00:21:32.892 killing process with pid 78172 00:21:32.892 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.892 00:21:32.892 Latency(us) 00:21:32.892 [2024-11-20T11:49:05.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.892 [2024-11-20T11:49:05.935Z] =================================================================================================================== 00:21:32.892 [2024-11-20T11:49:05.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.892 11:49:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:32.892 11:49:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:32.892 11:49:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78172' 00:21:32.892 11:49:05 -- common/autotest_common.sh@955 -- # kill 78172 00:21:32.892 11:49:05 -- common/autotest_common.sh@960 -- # wait 78172 00:21:32.892 11:49:05 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:21:32.892 11:49:05 -- common/autotest_common.sh@650 -- # local es=0 00:21:32.892 11:49:05 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:21:32.892 11:49:05 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:32.892 11:49:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.892 11:49:05 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:32.892 11:49:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.892 11:49:05 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:21:32.892 11:49:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:32.892 11:49:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:32.892 11:49:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:32.892 11:49:05 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:21:32.892 11:49:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:32.892 
11:49:05 -- target/tls.sh@28 -- # bdevperf_pid=78318 00:21:32.892 11:49:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:32.892 11:49:05 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:32.892 11:49:05 -- target/tls.sh@31 -- # waitforlisten 78318 /var/tmp/bdevperf.sock 00:21:32.892 11:49:05 -- common/autotest_common.sh@829 -- # '[' -z 78318 ']' 00:21:32.892 11:49:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.892 11:49:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.892 11:49:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.892 11:49:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.892 11:49:05 -- common/autotest_common.sh@10 -- # set +x 00:21:32.892 [2024-11-20 11:49:05.533000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:32.892 [2024-11-20 11:49:05.533057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78318 ] 00:21:32.892 [2024-11-20 11:49:05.652029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.892 [2024-11-20 11:49:05.792818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.461 11:49:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.461 11:49:06 -- common/autotest_common.sh@862 -- # return 0 00:21:33.461 11:49:06 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:21:33.720 [2024-11-20 11:49:06.546774] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.720 [2024-11-20 11:49:06.555350] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:33.720 [2024-11-20 11:49:06.556013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df33d0 (107): Transport endpoint is not connected 00:21:33.720 [2024-11-20 11:49:06.556999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df33d0 (9): Bad file descriptor 00:21:33.720 [2024-11-20 11:49:06.557995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.720 [2024-11-20 11:49:06.558011] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:33.720 [2024-11-20 11:49:06.558018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
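The errors above are the intended outcome of tls.sh@155: the host presents key2.txt while only key1.txt is registered for host1 on the target, so the connection is torn down during setup (the errno 107 and bad file descriptor messages) and the attach RPC fails with -32602, which the NOT wrapper turns into a pass. The failing call, as issued in the log, with an explicit expectation added for illustration:

    # Expected to fail: the PSK presented by the host does not match the key registered for host1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt \
        || echo 'attach rejected as expected'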
00:21:33.720 2024/11/20 11:49:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:33.720 request: 00:21:33.720 { 00:21:33.720 "method": "bdev_nvme_attach_controller", 00:21:33.720 "params": { 00:21:33.720 "name": "TLSTEST", 00:21:33.720 "trtype": "tcp", 00:21:33.720 "traddr": "10.0.0.2", 00:21:33.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.720 "adrfam": "ipv4", 00:21:33.720 "trsvcid": "4420", 00:21:33.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.720 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:21:33.720 } 00:21:33.720 } 00:21:33.720 Got JSON-RPC error response 00:21:33.720 GoRPCClient: error on JSON-RPC call 00:21:33.720 11:49:06 -- target/tls.sh@36 -- # killprocess 78318 00:21:33.720 11:49:06 -- common/autotest_common.sh@936 -- # '[' -z 78318 ']' 00:21:33.720 11:49:06 -- common/autotest_common.sh@940 -- # kill -0 78318 00:21:33.720 11:49:06 -- common/autotest_common.sh@941 -- # uname 00:21:33.720 11:49:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.720 11:49:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78318 00:21:33.720 killing process with pid 78318 00:21:33.720 Received shutdown signal, test time was about 10.000000 seconds 00:21:33.720 00:21:33.720 Latency(us) 00:21:33.720 [2024-11-20T11:49:06.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.720 [2024-11-20T11:49:06.763Z] =================================================================================================================== 00:21:33.720 [2024-11-20T11:49:06.763Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:33.720 11:49:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:33.720 11:49:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:33.720 11:49:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78318' 00:21:33.720 11:49:06 -- common/autotest_common.sh@955 -- # kill 78318 00:21:33.720 11:49:06 -- common/autotest_common.sh@960 -- # wait 78318 00:21:33.979 11:49:06 -- target/tls.sh@37 -- # return 1 00:21:33.979 11:49:06 -- common/autotest_common.sh@653 -- # es=1 00:21:33.979 11:49:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:33.979 11:49:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:33.979 11:49:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:33.980 11:49:06 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:33.980 11:49:06 -- common/autotest_common.sh@650 -- # local es=0 00:21:33.980 11:49:06 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:33.980 11:49:06 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:33.980 11:49:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:33.980 11:49:06 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:33.980 11:49:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:33.980 11:49:06 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:33.980 11:49:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:33.980 11:49:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:33.980 11:49:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:33.980 11:49:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:21:33.980 11:49:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.980 11:49:06 -- target/tls.sh@28 -- # bdevperf_pid=78364 00:21:33.980 11:49:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.980 11:49:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.980 11:49:06 -- target/tls.sh@31 -- # waitforlisten 78364 /var/tmp/bdevperf.sock 00:21:33.980 11:49:06 -- common/autotest_common.sh@829 -- # '[' -z 78364 ']' 00:21:33.980 11:49:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.980 11:49:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.980 11:49:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.980 11:49:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.980 11:49:06 -- common/autotest_common.sh@10 -- # set +x 00:21:34.239 [2024-11-20 11:49:07.021423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:34.239 [2024-11-20 11:49:07.021500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78364 ] 00:21:34.239 [2024-11-20 11:49:07.140277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.499 [2024-11-20 11:49:07.281467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.069 11:49:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.069 11:49:07 -- common/autotest_common.sh@862 -- # return 0 00:21:35.069 11:49:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:35.069 [2024-11-20 11:49:08.039449] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.069 [2024-11-20 11:49:08.043880] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:35.069 [2024-11-20 11:49:08.044107] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:35.069 [2024-11-20 11:49:08.044227] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:35.069 [2024-11-20 11:49:08.044593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x97d3d0 (107): Transport endpoint is not connected 00:21:35.069 [2024-11-20 11:49:08.045577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97d3d0 (9): Bad file descriptor 00:21:35.069 [2024-11-20 11:49:08.046573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.069 [2024-11-20 11:49:08.046589] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:35.069 [2024-11-20 11:49:08.046596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.069 2024/11/20 11:49:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:35.070 request: 00:21:35.070 { 00:21:35.070 "method": "bdev_nvme_attach_controller", 00:21:35.070 "params": { 00:21:35.070 "name": "TLSTEST", 00:21:35.070 "trtype": "tcp", 00:21:35.070 "traddr": "10.0.0.2", 00:21:35.070 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:35.070 "adrfam": "ipv4", 00:21:35.070 "trsvcid": "4420", 00:21:35.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.070 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:21:35.070 } 00:21:35.070 } 00:21:35.070 Got JSON-RPC error response 00:21:35.070 GoRPCClient: error on JSON-RPC call 00:21:35.070 11:49:08 -- target/tls.sh@36 -- # killprocess 78364 00:21:35.070 11:49:08 -- common/autotest_common.sh@936 -- # '[' -z 78364 ']' 00:21:35.070 11:49:08 -- common/autotest_common.sh@940 -- # kill -0 78364 00:21:35.070 11:49:08 -- common/autotest_common.sh@941 -- # uname 00:21:35.070 11:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.070 11:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78364 00:21:35.070 killing process with pid 78364 00:21:35.070 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.070 00:21:35.070 Latency(us) 00:21:35.070 [2024-11-20T11:49:08.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.070 [2024-11-20T11:49:08.113Z] =================================================================================================================== 00:21:35.070 [2024-11-20T11:49:08.113Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.070 11:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:35.070 11:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:35.070 11:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78364' 00:21:35.070 11:49:08 -- common/autotest_common.sh@955 -- # kill 78364 00:21:35.070 11:49:08 -- common/autotest_common.sh@960 -- # wait 78364 00:21:35.640 11:49:08 -- target/tls.sh@37 -- # return 1 00:21:35.640 11:49:08 -- common/autotest_common.sh@653 -- # es=1 00:21:35.640 11:49:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:35.640 11:49:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:35.640 11:49:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:35.640 11:49:08 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:35.640 11:49:08 -- 
common/autotest_common.sh@650 -- # local es=0 00:21:35.640 11:49:08 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:35.640 11:49:08 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:35.640 11:49:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.640 11:49:08 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:35.640 11:49:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.640 11:49:08 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:35.640 11:49:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:35.640 11:49:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:35.640 11:49:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:35.640 11:49:08 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:21:35.640 11:49:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.640 11:49:08 -- target/tls.sh@28 -- # bdevperf_pid=78410 00:21:35.640 11:49:08 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.640 11:49:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.640 11:49:08 -- target/tls.sh@31 -- # waitforlisten 78410 /var/tmp/bdevperf.sock 00:21:35.640 11:49:08 -- common/autotest_common.sh@829 -- # '[' -z 78410 ']' 00:21:35.640 11:49:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.640 11:49:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.640 11:49:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.640 11:49:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.640 11:49:08 -- common/autotest_common.sh@10 -- # set +x 00:21:35.640 [2024-11-20 11:49:08.512181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:35.640 [2024-11-20 11:49:08.512245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78410 ] 00:21:35.640 [2024-11-20 11:49:08.634710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.900 [2024-11-20 11:49:08.771794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.468 11:49:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.468 11:49:09 -- common/autotest_common.sh@862 -- # return 0 00:21:36.468 11:49:09 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:36.728 [2024-11-20 11:49:09.534208] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.728 [2024-11-20 11:49:09.539845] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:36.728 [2024-11-20 11:49:09.539877] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:36.728 [2024-11-20 11:49:09.539935] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:36.728 [2024-11-20 11:49:09.540394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5a3d0 (107): Transport endpoint is not connected 00:21:36.728 [2024-11-20 11:49:09.541379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5a3d0 (9): Bad file descriptor 00:21:36.728 [2024-11-20 11:49:09.542375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:36.728 [2024-11-20 11:49:09.542392] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:36.728 [2024-11-20 11:49:09.542399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
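Both of the preceding negative cases, host2 against cnode1 (tls.sh@158) and host1 against cnode2 (tls.sh@161), fail one step earlier than the wrong-key case: the target looks the PSK up by the TLS identity 'NVMe0R01 <hostnqn> <subnqn>' and finds nothing, hence the 'Could not find PSK for identity' errors. Only the host1/cnode1 pair was registered. Purely as a hypothetical illustration, not part of this run, making those identities valid would need their own registrations on the target, along the lines of:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

    # hypothetical: allow host2 to reach cnode1 with the same key
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk "$KEY"
    # hypothetical: cnode2 would additionally need to be created and given a TLS listener
    # before any host/PSK pair could be registered against it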
00:21:36.728 2024/11/20 11:49:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:36.728 request: 00:21:36.728 { 00:21:36.728 "method": "bdev_nvme_attach_controller", 00:21:36.728 "params": { 00:21:36.728 "name": "TLSTEST", 00:21:36.728 "trtype": "tcp", 00:21:36.728 "traddr": "10.0.0.2", 00:21:36.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.728 "adrfam": "ipv4", 00:21:36.728 "trsvcid": "4420", 00:21:36.728 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.728 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:21:36.728 } 00:21:36.728 } 00:21:36.728 Got JSON-RPC error response 00:21:36.728 GoRPCClient: error on JSON-RPC call 00:21:36.728 11:49:09 -- target/tls.sh@36 -- # killprocess 78410 00:21:36.728 11:49:09 -- common/autotest_common.sh@936 -- # '[' -z 78410 ']' 00:21:36.728 11:49:09 -- common/autotest_common.sh@940 -- # kill -0 78410 00:21:36.728 11:49:09 -- common/autotest_common.sh@941 -- # uname 00:21:36.728 11:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.728 11:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78410 00:21:36.728 killing process with pid 78410 00:21:36.728 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.728 00:21:36.728 Latency(us) 00:21:36.728 [2024-11-20T11:49:09.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.728 [2024-11-20T11:49:09.771Z] =================================================================================================================== 00:21:36.728 [2024-11-20T11:49:09.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.728 11:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:36.728 11:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:36.728 11:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78410' 00:21:36.728 11:49:09 -- common/autotest_common.sh@955 -- # kill 78410 00:21:36.728 11:49:09 -- common/autotest_common.sh@960 -- # wait 78410 00:21:36.988 11:49:09 -- target/tls.sh@37 -- # return 1 00:21:36.988 11:49:09 -- common/autotest_common.sh@653 -- # es=1 00:21:36.988 11:49:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:36.988 11:49:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:36.988 11:49:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:36.988 11:49:09 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:36.988 11:49:09 -- common/autotest_common.sh@650 -- # local es=0 00:21:36.988 11:49:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:36.988 11:49:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:36.988 11:49:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.988 11:49:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:36.988 11:49:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.988 11:49:09 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:36.988 11:49:09 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:36.988 11:49:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:36.988 11:49:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:36.988 11:49:09 -- target/tls.sh@23 -- # psk= 00:21:36.988 11:49:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.988 11:49:09 -- target/tls.sh@28 -- # bdevperf_pid=78456 00:21:36.988 11:49:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.988 11:49:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.988 11:49:09 -- target/tls.sh@31 -- # waitforlisten 78456 /var/tmp/bdevperf.sock 00:21:36.988 11:49:09 -- common/autotest_common.sh@829 -- # '[' -z 78456 ']' 00:21:36.988 11:49:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.988 11:49:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.988 11:49:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.988 11:49:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.988 11:49:09 -- common/autotest_common.sh@10 -- # set +x 00:21:36.988 [2024-11-20 11:49:10.008232] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:36.988 [2024-11-20 11:49:10.008311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78456 ] 00:21:37.252 [2024-11-20 11:49:10.128363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.252 [2024-11-20 11:49:10.269170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.851 11:49:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.851 11:49:10 -- common/autotest_common.sh@862 -- # return 0 00:21:37.852 11:49:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:38.112 [2024-11-20 11:49:11.038381] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:38.112 [2024-11-20 11:49:11.040325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13badc0 (9): Bad file descriptor 00:21:38.112 [2024-11-20 11:49:11.041317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:38.112 [2024-11-20 11:49:11.041333] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:38.112 [2024-11-20 11:49:11.041340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
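tls.sh@164 covers the last credential variant at this point in the run: attaching to the same TLS listener without any --psk at all. With no key offered, the connection is dropped during setup (the errno 107 and bad file descriptor messages above) and the attach RPC fails, again satisfied by the NOT wrapper. The call as issued:

    # Expected to fail: no --psk supplied against the TLS-enabled listener on 4420
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        || echo 'attach without a PSK rejected as expected'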
00:21:38.112 2024/11/20 11:49:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:38.112 request: 00:21:38.112 { 00:21:38.112 "method": "bdev_nvme_attach_controller", 00:21:38.112 "params": { 00:21:38.112 "name": "TLSTEST", 00:21:38.112 "trtype": "tcp", 00:21:38.112 "traddr": "10.0.0.2", 00:21:38.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.112 "adrfam": "ipv4", 00:21:38.112 "trsvcid": "4420", 00:21:38.112 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:21:38.112 } 00:21:38.112 } 00:21:38.112 Got JSON-RPC error response 00:21:38.112 GoRPCClient: error on JSON-RPC call 00:21:38.112 11:49:11 -- target/tls.sh@36 -- # killprocess 78456 00:21:38.112 11:49:11 -- common/autotest_common.sh@936 -- # '[' -z 78456 ']' 00:21:38.112 11:49:11 -- common/autotest_common.sh@940 -- # kill -0 78456 00:21:38.112 11:49:11 -- common/autotest_common.sh@941 -- # uname 00:21:38.112 11:49:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.112 11:49:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78456 00:21:38.112 killing process with pid 78456 00:21:38.112 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.112 00:21:38.112 Latency(us) 00:21:38.112 [2024-11-20T11:49:11.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.112 [2024-11-20T11:49:11.155Z] =================================================================================================================== 00:21:38.112 [2024-11-20T11:49:11.155Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:38.112 11:49:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:38.112 11:49:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:38.112 11:49:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78456' 00:21:38.112 11:49:11 -- common/autotest_common.sh@955 -- # kill 78456 00:21:38.112 11:49:11 -- common/autotest_common.sh@960 -- # wait 78456 00:21:38.681 11:49:11 -- target/tls.sh@37 -- # return 1 00:21:38.681 11:49:11 -- common/autotest_common.sh@653 -- # es=1 00:21:38.681 11:49:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:38.681 11:49:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:38.681 11:49:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:38.681 11:49:11 -- target/tls.sh@167 -- # killprocess 77809 00:21:38.681 11:49:11 -- common/autotest_common.sh@936 -- # '[' -z 77809 ']' 00:21:38.681 11:49:11 -- common/autotest_common.sh@940 -- # kill -0 77809 00:21:38.681 11:49:11 -- common/autotest_common.sh@941 -- # uname 00:21:38.681 11:49:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.681 11:49:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77809 00:21:38.681 killing process with pid 77809 00:21:38.681 11:49:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:38.681 11:49:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:38.681 11:49:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77809' 00:21:38.681 11:49:11 -- common/autotest_common.sh@955 -- # kill 77809 00:21:38.681 11:49:11 -- common/autotest_common.sh@960 -- # wait 77809 00:21:38.681 11:49:11 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:21:38.681 11:49:11 -- target/tls.sh@49 -- # local key hash crc 00:21:38.681 11:49:11 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:38.681 11:49:11 -- target/tls.sh@51 -- # hash=02 00:21:38.943 11:49:11 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:21:38.943 11:49:11 -- target/tls.sh@52 -- # gzip -1 -c 00:21:38.943 11:49:11 -- target/tls.sh@52 -- # tail -c8 00:21:38.943 11:49:11 -- target/tls.sh@52 -- # head -c 4 00:21:38.943 11:49:11 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:38.943 11:49:11 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:38.943 11:49:11 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:38.943 11:49:11 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:38.943 11:49:11 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:38.943 11:49:11 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:38.943 11:49:11 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:38.943 11:49:11 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:38.943 11:49:11 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:38.943 11:49:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:38.943 11:49:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.943 11:49:11 -- common/autotest_common.sh@10 -- # set +x 00:21:38.943 11:49:11 -- nvmf/common.sh@469 -- # nvmfpid=78516 00:21:38.943 11:49:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.943 11:49:11 -- nvmf/common.sh@470 -- # waitforlisten 78516 00:21:38.943 11:49:11 -- common/autotest_common.sh@829 -- # '[' -z 78516 ']' 00:21:38.943 11:49:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.943 11:49:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.943 11:49:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.943 11:49:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.943 11:49:11 -- common/autotest_common.sh@10 -- # set +x 00:21:38.943 [2024-11-20 11:49:11.811033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:38.943 [2024-11-20 11:49:11.811103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.943 [2024-11-20 11:49:11.949668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.203 [2024-11-20 11:49:12.024477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:39.203 [2024-11-20 11:49:12.024593] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:39.203 [2024-11-20 11:49:12.024601] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.203 [2024-11-20 11:49:12.024607] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.203 [2024-11-20 11:49:12.024624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.773 11:49:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.773 11:49:12 -- common/autotest_common.sh@862 -- # return 0 00:21:39.773 11:49:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:39.773 11:49:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.773 11:49:12 -- common/autotest_common.sh@10 -- # set +x 00:21:39.773 11:49:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.773 11:49:12 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:39.773 11:49:12 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:39.773 11:49:12 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:40.032 [2024-11-20 11:49:12.870869] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.032 11:49:12 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:40.292 11:49:13 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:40.292 [2024-11-20 11:49:13.290135] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.292 [2024-11-20 11:49:13.290311] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.292 11:49:13 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:40.552 malloc0 00:21:40.552 11:49:13 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:40.812 11:49:13 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:41.072 11:49:13 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:41.072 11:49:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.072 11:49:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:41.072 11:49:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:41.072 11:49:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:21:41.072 11:49:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.072 11:49:13 -- target/tls.sh@28 -- # bdevperf_pid=78610 00:21:41.072 11:49:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.072 11:49:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.072 11:49:13 -- target/tls.sh@31 -- # waitforlisten 78610 /var/tmp/bdevperf.sock 00:21:41.072 11:49:13 -- 
common/autotest_common.sh@829 -- # '[' -z 78610 ']' 00:21:41.072 11:49:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.073 11:49:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.073 11:49:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.073 11:49:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.073 11:49:13 -- common/autotest_common.sh@10 -- # set +x 00:21:41.073 [2024-11-20 11:49:13.975733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:41.073 [2024-11-20 11:49:13.975788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78610 ] 00:21:41.333 [2024-11-20 11:49:14.114249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.333 [2024-11-20 11:49:14.252311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.902 11:49:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.902 11:49:14 -- common/autotest_common.sh@862 -- # return 0 00:21:41.902 11:49:14 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:42.161 [2024-11-20 11:49:15.049816] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.161 TLSTESTn1 00:21:42.161 11:49:15 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:42.420 Running I/O for 10 seconds... 
00:21:52.433 00:21:52.433 Latency(us) 00:21:52.433 [2024-11-20T11:49:25.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.433 [2024-11-20T11:49:25.476Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:52.433 Verification LBA range: start 0x0 length 0x2000 00:21:52.433 TLSTESTn1 : 10.01 9554.13 37.32 0.00 0.00 13380.66 1960.36 18888.10 00:21:52.433 [2024-11-20T11:49:25.476Z] =================================================================================================================== 00:21:52.433 [2024-11-20T11:49:25.476Z] Total : 9554.13 37.32 0.00 0.00 13380.66 1960.36 18888.10 00:21:52.433 0 00:21:52.433 11:49:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.433 11:49:25 -- target/tls.sh@45 -- # killprocess 78610 00:21:52.433 11:49:25 -- common/autotest_common.sh@936 -- # '[' -z 78610 ']' 00:21:52.433 11:49:25 -- common/autotest_common.sh@940 -- # kill -0 78610 00:21:52.433 11:49:25 -- common/autotest_common.sh@941 -- # uname 00:21:52.433 11:49:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:52.433 11:49:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78610 00:21:52.433 killing process with pid 78610 00:21:52.433 11:49:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:52.433 11:49:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:52.433 11:49:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78610' 00:21:52.433 11:49:25 -- common/autotest_common.sh@955 -- # kill 78610 00:21:52.433 Received shutdown signal, test time was about 10.000000 seconds 00:21:52.433 00:21:52.433 Latency(us) 00:21:52.433 [2024-11-20T11:49:25.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.433 [2024-11-20T11:49:25.476Z] =================================================================================================================== 00:21:52.433 [2024-11-20T11:49:25.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.433 11:49:25 -- common/autotest_common.sh@960 -- # wait 78610 00:21:52.694 11:49:25 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:52.694 11:49:25 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:52.694 11:49:25 -- common/autotest_common.sh@650 -- # local es=0 00:21:52.694 11:49:25 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:52.694 11:49:25 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:52.694 11:49:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.694 11:49:25 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:52.694 11:49:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.694 11:49:25 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:52.694 11:49:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:52.694 11:49:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:52.694 11:49:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:52.694 11:49:25 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:21:52.694 11:49:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.694 11:49:25 -- target/tls.sh@28 -- # bdevperf_pid=78767 00:21:52.694 11:49:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:52.694 11:49:25 -- target/tls.sh@31 -- # waitforlisten 78767 /var/tmp/bdevperf.sock 00:21:52.694 11:49:25 -- common/autotest_common.sh@829 -- # '[' -z 78767 ']' 00:21:52.694 11:49:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.694 11:49:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.694 11:49:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.694 11:49:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.694 11:49:25 -- common/autotest_common.sh@10 -- # set +x 00:21:52.694 11:49:25 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:52.694 [2024-11-20 11:49:25.561247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:52.694 [2024-11-20 11:49:25.561315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78767 ] 00:21:52.694 [2024-11-20 11:49:25.679696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.954 [2024-11-20 11:49:25.766967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.524 11:49:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.524 11:49:26 -- common/autotest_common.sh@862 -- # return 0 00:21:53.524 11:49:26 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:53.524 [2024-11-20 11:49:26.557392] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:53.524 [2024-11-20 11:49:26.557443] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:53.524 2024/11/20 11:49:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:53.524 request: 00:21:53.524 { 00:21:53.524 "method": "bdev_nvme_attach_controller", 00:21:53.524 "params": { 00:21:53.524 "name": "TLSTEST", 00:21:53.524 "trtype": "tcp", 00:21:53.524 "traddr": "10.0.0.2", 00:21:53.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.524 "adrfam": "ipv4", 00:21:53.524 "trsvcid": "4420", 00:21:53.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.524 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:21:53.524 } 00:21:53.524 } 00:21:53.524 Got 
JSON-RPC error response 00:21:53.524 GoRPCClient: error on JSON-RPC call 00:21:53.786 11:49:26 -- target/tls.sh@36 -- # killprocess 78767 00:21:53.786 11:49:26 -- common/autotest_common.sh@936 -- # '[' -z 78767 ']' 00:21:53.786 11:49:26 -- common/autotest_common.sh@940 -- # kill -0 78767 00:21:53.786 11:49:26 -- common/autotest_common.sh@941 -- # uname 00:21:53.786 11:49:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:53.786 11:49:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78767 00:21:53.786 killing process with pid 78767 00:21:53.786 11:49:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:53.786 11:49:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:53.786 11:49:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78767' 00:21:53.786 Received shutdown signal, test time was about 10.000000 seconds 00:21:53.786 00:21:53.786 Latency(us) 00:21:53.786 [2024-11-20T11:49:26.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.786 [2024-11-20T11:49:26.829Z] =================================================================================================================== 00:21:53.786 [2024-11-20T11:49:26.829Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:53.786 11:49:26 -- common/autotest_common.sh@955 -- # kill 78767 00:21:53.786 11:49:26 -- common/autotest_common.sh@960 -- # wait 78767 00:21:54.046 11:49:26 -- target/tls.sh@37 -- # return 1 00:21:54.046 11:49:26 -- common/autotest_common.sh@653 -- # es=1 00:21:54.046 11:49:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:54.046 11:49:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:54.046 11:49:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:54.046 11:49:26 -- target/tls.sh@183 -- # killprocess 78516 00:21:54.046 11:49:26 -- common/autotest_common.sh@936 -- # '[' -z 78516 ']' 00:21:54.046 11:49:26 -- common/autotest_common.sh@940 -- # kill -0 78516 00:21:54.046 11:49:26 -- common/autotest_common.sh@941 -- # uname 00:21:54.046 11:49:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:54.046 11:49:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78516 00:21:54.046 11:49:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:54.046 11:49:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:54.046 11:49:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78516' 00:21:54.046 killing process with pid 78516 00:21:54.046 11:49:26 -- common/autotest_common.sh@955 -- # kill 78516 00:21:54.046 11:49:26 -- common/autotest_common.sh@960 -- # wait 78516 00:21:54.307 11:49:27 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:54.307 11:49:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:54.307 11:49:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:54.307 11:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:54.307 11:49:27 -- nvmf/common.sh@469 -- # nvmfpid=78813 00:21:54.307 11:49:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:54.307 11:49:27 -- nvmf/common.sh@470 -- # waitforlisten 78813 00:21:54.307 11:49:27 -- common/autotest_common.sh@829 -- # '[' -z 78813 ']' 00:21:54.307 11:49:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.307 11:49:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.307 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.307 11:49:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.307 11:49:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.307 11:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:54.307 [2024-11-20 11:49:27.135938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:54.307 [2024-11-20 11:49:27.135998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.307 [2024-11-20 11:49:27.259780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.307 [2024-11-20 11:49:27.345151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:54.307 [2024-11-20 11:49:27.345275] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.307 [2024-11-20 11:49:27.345282] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.307 [2024-11-20 11:49:27.345288] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.307 [2024-11-20 11:49:27.345307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.246 11:49:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.246 11:49:28 -- common/autotest_common.sh@862 -- # return 0 00:21:55.246 11:49:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:55.246 11:49:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:55.246 11:49:28 -- common/autotest_common.sh@10 -- # set +x 00:21:55.246 11:49:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.246 11:49:28 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:55.246 11:49:28 -- common/autotest_common.sh@650 -- # local es=0 00:21:55.246 11:49:28 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:55.246 11:49:28 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:55.246 11:49:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.246 11:49:28 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:55.246 11:49:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.246 11:49:28 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:55.246 11:49:28 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:55.246 11:49:28 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:55.246 [2024-11-20 11:49:28.240175] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.246 11:49:28 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:55.506 11:49:28 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:55.767 
[2024-11-20 11:49:28.627495] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.767 [2024-11-20 11:49:28.627701] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.767 11:49:28 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:56.027 malloc0 00:21:56.027 11:49:28 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:56.287 11:49:29 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:56.287 [2024-11-20 11:49:29.238645] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:56.287 [2024-11-20 11:49:29.238690] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:56.287 [2024-11-20 11:49:29.238704] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:56.287 2024/11/20 11:49:29 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:21:56.287 request: 00:21:56.287 { 00:21:56.287 "method": "nvmf_subsystem_add_host", 00:21:56.287 "params": { 00:21:56.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.287 "host": "nqn.2016-06.io.spdk:host1", 00:21:56.287 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:21:56.287 } 00:21:56.287 } 00:21:56.287 Got JSON-RPC error response 00:21:56.287 GoRPCClient: error on JSON-RPC call 00:21:56.287 11:49:29 -- common/autotest_common.sh@653 -- # es=1 00:21:56.287 11:49:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.287 11:49:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.287 11:49:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.287 11:49:29 -- target/tls.sh@189 -- # killprocess 78813 00:21:56.287 11:49:29 -- common/autotest_common.sh@936 -- # '[' -z 78813 ']' 00:21:56.287 11:49:29 -- common/autotest_common.sh@940 -- # kill -0 78813 00:21:56.287 11:49:29 -- common/autotest_common.sh@941 -- # uname 00:21:56.287 11:49:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:56.287 11:49:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78813 00:21:56.287 11:49:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:56.287 11:49:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:56.287 killing process with pid 78813 00:21:56.287 11:49:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78813' 00:21:56.287 11:49:29 -- common/autotest_common.sh@955 -- # kill 78813 00:21:56.287 11:49:29 -- common/autotest_common.sh@960 -- # wait 78813 00:21:56.547 11:49:29 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:56.547 11:49:29 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:21:56.547 11:49:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:56.547 11:49:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.548 11:49:29 -- common/autotest_common.sh@10 -- # set +x 00:21:56.548 11:49:29 -- nvmf/common.sh@469 -- # nvmfpid=78925 
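The block above is the PSK file-permission check: while key_long.txt sits at mode 0666, the initiator-side bdev_nvme_attach_controller (via /var/tmp/bdevperf.sock) and the target-side nvmf_subsystem_add_host are both rejected with "Incorrect permissions for PSK file", and only after the chmod 0600 at target/tls.sh@190 does the flow continue. A condensed sketch of that sequence, with paths and NQNs copied from the trace above (rpc.py = scripts/rpc.py, and key= is just shorthand here):

    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    chmod 0666 "$key"          # group/other readable: SPDK refuses to load it
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"   # rejected
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"   # rejected
    chmod 0600 "$key"          # owner-only: the same RPCs succeed from here on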
00:21:56.548 11:49:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.548 11:49:29 -- nvmf/common.sh@470 -- # waitforlisten 78925 00:21:56.548 11:49:29 -- common/autotest_common.sh@829 -- # '[' -z 78925 ']' 00:21:56.548 11:49:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.548 11:49:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.548 11:49:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.548 11:49:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.548 11:49:29 -- common/autotest_common.sh@10 -- # set +x 00:21:56.808 [2024-11-20 11:49:29.592918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:56.808 [2024-11-20 11:49:29.592987] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.808 [2024-11-20 11:49:29.711770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.808 [2024-11-20 11:49:29.791882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:56.808 [2024-11-20 11:49:29.791996] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.808 [2024-11-20 11:49:29.792002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.808 [2024-11-20 11:49:29.792007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
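This restart launches nvmf_tgt with -e 0xFFFF, i.e. every tracepoint group enabled, and the two app_setup_trace notices above describe how to inspect what gets recorded. Per those notices, either of the following would do while instance 0 is running (the destination path in the cp line is only an example):

    spdk_trace -s nvmf -i 0             # live snapshot of the trace events
    cp /dev/shm/nvmf_trace.0 /tmp/      # or keep the shared-memory trace file for offline analysis

The same nvmf_trace.0 file is what the cleanup step at the end of this test tars into the output directory.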
00:21:56.808 [2024-11-20 11:49:29.792029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.748 11:49:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.748 11:49:30 -- common/autotest_common.sh@862 -- # return 0 00:21:57.748 11:49:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:57.748 11:49:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.748 11:49:30 -- common/autotest_common.sh@10 -- # set +x 00:21:57.748 11:49:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.748 11:49:30 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:57.748 11:49:30 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:57.748 11:49:30 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:57.748 [2024-11-20 11:49:30.646913] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.748 11:49:30 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:58.008 11:49:30 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:58.268 [2024-11-20 11:49:31.058187] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.268 [2024-11-20 11:49:31.058378] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.268 11:49:31 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:58.268 malloc0 00:21:58.268 11:49:31 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:58.528 11:49:31 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:58.788 11:49:31 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:58.788 11:49:31 -- target/tls.sh@197 -- # bdevperf_pid=79022 00:21:58.788 11:49:31 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.788 11:49:31 -- target/tls.sh@200 -- # waitforlisten 79022 /var/tmp/bdevperf.sock 00:21:58.788 11:49:31 -- common/autotest_common.sh@829 -- # '[' -z 79022 ']' 00:21:58.788 11:49:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.788 11:49:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.788 11:49:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.788 11:49:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.788 11:49:31 -- common/autotest_common.sh@10 -- # set +x 00:21:58.788 [2024-11-20 11:49:31.630182] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
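With the key back at 0600, the setup_nvmf_tgt pass above (target/tls.sh@194) goes through cleanly. Stripped of xtrace noise, the RPC sequence is (rpc.py = scripts/rpc.py, $key as before):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The -k on nvmf_subsystem_add_listener is what requests the TLS listener (hence the "TLS support is considered experimental" notice), and it reappears as "secure_channel": true in the save_config dump further down.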
00:21:58.788 [2024-11-20 11:49:31.630232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79022 ] 00:21:58.788 [2024-11-20 11:49:31.767409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.047 [2024-11-20 11:49:31.849261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.618 11:49:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.618 11:49:32 -- common/autotest_common.sh@862 -- # return 0 00:21:59.618 11:49:32 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:59.618 [2024-11-20 11:49:32.644000] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.878 TLSTESTn1 00:21:59.878 11:49:32 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:00.139 11:49:33 -- target/tls.sh@205 -- # tgtconf='{ 00:22:00.139 "subsystems": [ 00:22:00.139 { 00:22:00.139 "subsystem": "iobuf", 00:22:00.139 "config": [ 00:22:00.139 { 00:22:00.139 "method": "iobuf_set_options", 00:22:00.139 "params": { 00:22:00.139 "large_bufsize": 135168, 00:22:00.139 "large_pool_count": 1024, 00:22:00.139 "small_bufsize": 8192, 00:22:00.139 "small_pool_count": 8192 00:22:00.139 } 00:22:00.139 } 00:22:00.139 ] 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "subsystem": "sock", 00:22:00.139 "config": [ 00:22:00.139 { 00:22:00.139 "method": "sock_impl_set_options", 00:22:00.139 "params": { 00:22:00.139 "enable_ktls": false, 00:22:00.139 "enable_placement_id": 0, 00:22:00.139 "enable_quickack": false, 00:22:00.139 "enable_recv_pipe": true, 00:22:00.139 "enable_zerocopy_send_client": false, 00:22:00.139 "enable_zerocopy_send_server": true, 00:22:00.139 "impl_name": "posix", 00:22:00.139 "recv_buf_size": 2097152, 00:22:00.139 "send_buf_size": 2097152, 00:22:00.139 "tls_version": 0, 00:22:00.139 "zerocopy_threshold": 0 00:22:00.139 } 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "method": "sock_impl_set_options", 00:22:00.139 "params": { 00:22:00.139 "enable_ktls": false, 00:22:00.139 "enable_placement_id": 0, 00:22:00.139 "enable_quickack": false, 00:22:00.139 "enable_recv_pipe": true, 00:22:00.139 "enable_zerocopy_send_client": false, 00:22:00.139 "enable_zerocopy_send_server": true, 00:22:00.139 "impl_name": "ssl", 00:22:00.139 "recv_buf_size": 4096, 00:22:00.139 "send_buf_size": 4096, 00:22:00.139 "tls_version": 0, 00:22:00.139 "zerocopy_threshold": 0 00:22:00.139 } 00:22:00.139 } 00:22:00.139 ] 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "subsystem": "vmd", 00:22:00.139 "config": [] 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "subsystem": "accel", 00:22:00.139 "config": [ 00:22:00.139 { 00:22:00.139 "method": "accel_set_options", 00:22:00.139 "params": { 00:22:00.139 "buf_count": 2048, 00:22:00.139 "large_cache_size": 16, 00:22:00.139 "sequence_count": 2048, 00:22:00.139 "small_cache_size": 128, 00:22:00.139 "task_count": 2048 00:22:00.139 } 00:22:00.139 } 00:22:00.139 ] 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "subsystem": "bdev", 00:22:00.139 "config": [ 00:22:00.139 { 00:22:00.139 "method": "bdev_set_options", 00:22:00.139 "params": { 00:22:00.139 
"bdev_auto_examine": true, 00:22:00.139 "bdev_io_cache_size": 256, 00:22:00.139 "bdev_io_pool_size": 65535, 00:22:00.139 "iobuf_large_cache_size": 16, 00:22:00.139 "iobuf_small_cache_size": 128 00:22:00.139 } 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "method": "bdev_raid_set_options", 00:22:00.139 "params": { 00:22:00.139 "process_window_size_kb": 1024 00:22:00.139 } 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "method": "bdev_iscsi_set_options", 00:22:00.139 "params": { 00:22:00.139 "timeout_sec": 30 00:22:00.139 } 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "method": "bdev_nvme_set_options", 00:22:00.139 "params": { 00:22:00.139 "action_on_timeout": "none", 00:22:00.139 "allow_accel_sequence": false, 00:22:00.139 "arbitration_burst": 0, 00:22:00.139 "bdev_retry_count": 3, 00:22:00.139 "ctrlr_loss_timeout_sec": 0, 00:22:00.139 "delay_cmd_submit": true, 00:22:00.139 "fast_io_fail_timeout_sec": 0, 00:22:00.139 "generate_uuids": false, 00:22:00.139 "high_priority_weight": 0, 00:22:00.139 "io_path_stat": false, 00:22:00.139 "io_queue_requests": 0, 00:22:00.139 "keep_alive_timeout_ms": 10000, 00:22:00.139 "low_priority_weight": 0, 00:22:00.139 "medium_priority_weight": 0, 00:22:00.139 "nvme_adminq_poll_period_us": 10000, 00:22:00.139 "nvme_ioq_poll_period_us": 0, 00:22:00.139 "reconnect_delay_sec": 0, 00:22:00.139 "timeout_admin_us": 0, 00:22:00.139 "timeout_us": 0, 00:22:00.139 "transport_ack_timeout": 0, 00:22:00.139 "transport_retry_count": 4, 00:22:00.139 "transport_tos": 0 00:22:00.139 } 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "method": "bdev_nvme_set_hotplug", 00:22:00.139 "params": { 00:22:00.139 "enable": false, 00:22:00.139 "period_us": 100000 00:22:00.139 } 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "method": "bdev_malloc_create", 00:22:00.139 "params": { 00:22:00.139 "block_size": 4096, 00:22:00.139 "name": "malloc0", 00:22:00.139 "num_blocks": 8192, 00:22:00.139 "optimal_io_boundary": 0, 00:22:00.139 "physical_block_size": 4096, 00:22:00.139 "uuid": "d038e762-89b3-4198-a27c-d30304a5ef2a" 00:22:00.139 } 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "method": "bdev_wait_for_examine" 00:22:00.139 } 00:22:00.139 ] 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "subsystem": "nbd", 00:22:00.139 "config": [] 00:22:00.139 }, 00:22:00.139 { 00:22:00.139 "subsystem": "scheduler", 00:22:00.139 "config": [ 00:22:00.139 { 00:22:00.139 "method": "framework_set_scheduler", 00:22:00.139 "params": { 00:22:00.139 "name": "static" 00:22:00.140 } 00:22:00.140 } 00:22:00.140 ] 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "subsystem": "nvmf", 00:22:00.140 "config": [ 00:22:00.140 { 00:22:00.140 "method": "nvmf_set_config", 00:22:00.140 "params": { 00:22:00.140 "admin_cmd_passthru": { 00:22:00.140 "identify_ctrlr": false 00:22:00.140 }, 00:22:00.140 "discovery_filter": "match_any" 00:22:00.140 } 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "method": "nvmf_set_max_subsystems", 00:22:00.140 "params": { 00:22:00.140 "max_subsystems": 1024 00:22:00.140 } 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "method": "nvmf_set_crdt", 00:22:00.140 "params": { 00:22:00.140 "crdt1": 0, 00:22:00.140 "crdt2": 0, 00:22:00.140 "crdt3": 0 00:22:00.140 } 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "method": "nvmf_create_transport", 00:22:00.140 "params": { 00:22:00.140 "abort_timeout_sec": 1, 00:22:00.140 "buf_cache_size": 4294967295, 00:22:00.140 "c2h_success": false, 00:22:00.140 "dif_insert_or_strip": false, 00:22:00.140 "in_capsule_data_size": 4096, 00:22:00.140 "io_unit_size": 131072, 00:22:00.140 "max_aq_depth": 128, 
00:22:00.140 "max_io_qpairs_per_ctrlr": 127, 00:22:00.140 "max_io_size": 131072, 00:22:00.140 "max_queue_depth": 128, 00:22:00.140 "num_shared_buffers": 511, 00:22:00.140 "sock_priority": 0, 00:22:00.140 "trtype": "TCP", 00:22:00.140 "zcopy": false 00:22:00.140 } 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "method": "nvmf_create_subsystem", 00:22:00.140 "params": { 00:22:00.140 "allow_any_host": false, 00:22:00.140 "ana_reporting": false, 00:22:00.140 "max_cntlid": 65519, 00:22:00.140 "max_namespaces": 10, 00:22:00.140 "min_cntlid": 1, 00:22:00.140 "model_number": "SPDK bdev Controller", 00:22:00.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.140 "serial_number": "SPDK00000000000001" 00:22:00.140 } 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "method": "nvmf_subsystem_add_host", 00:22:00.140 "params": { 00:22:00.140 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.140 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:22:00.140 } 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "method": "nvmf_subsystem_add_ns", 00:22:00.140 "params": { 00:22:00.140 "namespace": { 00:22:00.140 "bdev_name": "malloc0", 00:22:00.140 "nguid": "D038E76289B34198A27CD30304A5EF2A", 00:22:00.140 "nsid": 1, 00:22:00.140 "uuid": "d038e762-89b3-4198-a27c-d30304a5ef2a" 00:22:00.140 }, 00:22:00.140 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:00.140 } 00:22:00.140 }, 00:22:00.140 { 00:22:00.140 "method": "nvmf_subsystem_add_listener", 00:22:00.140 "params": { 00:22:00.140 "listen_address": { 00:22:00.140 "adrfam": "IPv4", 00:22:00.140 "traddr": "10.0.0.2", 00:22:00.140 "trsvcid": "4420", 00:22:00.140 "trtype": "TCP" 00:22:00.140 }, 00:22:00.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.140 "secure_channel": true 00:22:00.140 } 00:22:00.140 } 00:22:00.140 ] 00:22:00.140 } 00:22:00.140 ] 00:22:00.140 }' 00:22:00.140 11:49:33 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:00.401 11:49:33 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:00.401 "subsystems": [ 00:22:00.401 { 00:22:00.401 "subsystem": "iobuf", 00:22:00.401 "config": [ 00:22:00.401 { 00:22:00.401 "method": "iobuf_set_options", 00:22:00.401 "params": { 00:22:00.401 "large_bufsize": 135168, 00:22:00.401 "large_pool_count": 1024, 00:22:00.401 "small_bufsize": 8192, 00:22:00.401 "small_pool_count": 8192 00:22:00.401 } 00:22:00.401 } 00:22:00.401 ] 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "subsystem": "sock", 00:22:00.401 "config": [ 00:22:00.401 { 00:22:00.401 "method": "sock_impl_set_options", 00:22:00.401 "params": { 00:22:00.401 "enable_ktls": false, 00:22:00.401 "enable_placement_id": 0, 00:22:00.401 "enable_quickack": false, 00:22:00.401 "enable_recv_pipe": true, 00:22:00.401 "enable_zerocopy_send_client": false, 00:22:00.401 "enable_zerocopy_send_server": true, 00:22:00.401 "impl_name": "posix", 00:22:00.401 "recv_buf_size": 2097152, 00:22:00.401 "send_buf_size": 2097152, 00:22:00.401 "tls_version": 0, 00:22:00.401 "zerocopy_threshold": 0 00:22:00.401 } 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "method": "sock_impl_set_options", 00:22:00.401 "params": { 00:22:00.401 "enable_ktls": false, 00:22:00.401 "enable_placement_id": 0, 00:22:00.401 "enable_quickack": false, 00:22:00.401 "enable_recv_pipe": true, 00:22:00.401 "enable_zerocopy_send_client": false, 00:22:00.401 "enable_zerocopy_send_server": true, 00:22:00.401 "impl_name": "ssl", 00:22:00.401 "recv_buf_size": 4096, 00:22:00.401 "send_buf_size": 4096, 00:22:00.401 
"tls_version": 0, 00:22:00.401 "zerocopy_threshold": 0 00:22:00.401 } 00:22:00.401 } 00:22:00.401 ] 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "subsystem": "vmd", 00:22:00.401 "config": [] 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "subsystem": "accel", 00:22:00.401 "config": [ 00:22:00.401 { 00:22:00.401 "method": "accel_set_options", 00:22:00.401 "params": { 00:22:00.401 "buf_count": 2048, 00:22:00.401 "large_cache_size": 16, 00:22:00.401 "sequence_count": 2048, 00:22:00.401 "small_cache_size": 128, 00:22:00.401 "task_count": 2048 00:22:00.401 } 00:22:00.401 } 00:22:00.401 ] 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "subsystem": "bdev", 00:22:00.401 "config": [ 00:22:00.401 { 00:22:00.401 "method": "bdev_set_options", 00:22:00.401 "params": { 00:22:00.401 "bdev_auto_examine": true, 00:22:00.401 "bdev_io_cache_size": 256, 00:22:00.401 "bdev_io_pool_size": 65535, 00:22:00.401 "iobuf_large_cache_size": 16, 00:22:00.401 "iobuf_small_cache_size": 128 00:22:00.401 } 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "method": "bdev_raid_set_options", 00:22:00.401 "params": { 00:22:00.401 "process_window_size_kb": 1024 00:22:00.401 } 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "method": "bdev_iscsi_set_options", 00:22:00.401 "params": { 00:22:00.401 "timeout_sec": 30 00:22:00.401 } 00:22:00.401 }, 00:22:00.401 { 00:22:00.401 "method": "bdev_nvme_set_options", 00:22:00.401 "params": { 00:22:00.401 "action_on_timeout": "none", 00:22:00.402 "allow_accel_sequence": false, 00:22:00.402 "arbitration_burst": 0, 00:22:00.402 "bdev_retry_count": 3, 00:22:00.402 "ctrlr_loss_timeout_sec": 0, 00:22:00.402 "delay_cmd_submit": true, 00:22:00.402 "fast_io_fail_timeout_sec": 0, 00:22:00.402 "generate_uuids": false, 00:22:00.402 "high_priority_weight": 0, 00:22:00.402 "io_path_stat": false, 00:22:00.402 "io_queue_requests": 512, 00:22:00.402 "keep_alive_timeout_ms": 10000, 00:22:00.402 "low_priority_weight": 0, 00:22:00.402 "medium_priority_weight": 0, 00:22:00.402 "nvme_adminq_poll_period_us": 10000, 00:22:00.402 "nvme_ioq_poll_period_us": 0, 00:22:00.402 "reconnect_delay_sec": 0, 00:22:00.402 "timeout_admin_us": 0, 00:22:00.402 "timeout_us": 0, 00:22:00.402 "transport_ack_timeout": 0, 00:22:00.402 "transport_retry_count": 4, 00:22:00.402 "transport_tos": 0 00:22:00.402 } 00:22:00.402 }, 00:22:00.402 { 00:22:00.402 "method": "bdev_nvme_attach_controller", 00:22:00.402 "params": { 00:22:00.402 "adrfam": "IPv4", 00:22:00.402 "ctrlr_loss_timeout_sec": 0, 00:22:00.402 "ddgst": false, 00:22:00.402 "fast_io_fail_timeout_sec": 0, 00:22:00.402 "hdgst": false, 00:22:00.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.402 "name": "TLSTEST", 00:22:00.402 "prchk_guard": false, 00:22:00.402 "prchk_reftag": false, 00:22:00.402 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:22:00.402 "reconnect_delay_sec": 0, 00:22:00.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.402 "traddr": "10.0.0.2", 00:22:00.402 "trsvcid": "4420", 00:22:00.402 "trtype": "TCP" 00:22:00.402 } 00:22:00.402 }, 00:22:00.402 { 00:22:00.402 "method": "bdev_nvme_set_hotplug", 00:22:00.402 "params": { 00:22:00.402 "enable": false, 00:22:00.402 "period_us": 100000 00:22:00.402 } 00:22:00.402 }, 00:22:00.402 { 00:22:00.402 "method": "bdev_wait_for_examine" 00:22:00.402 } 00:22:00.402 ] 00:22:00.402 }, 00:22:00.402 { 00:22:00.402 "subsystem": "nbd", 00:22:00.402 "config": [] 00:22:00.402 } 00:22:00.402 ] 00:22:00.402 }' 00:22:00.402 11:49:33 -- target/tls.sh@208 -- # killprocess 79022 00:22:00.402 11:49:33 -- 
common/autotest_common.sh@936 -- # '[' -z 79022 ']' 00:22:00.402 11:49:33 -- common/autotest_common.sh@940 -- # kill -0 79022 00:22:00.402 11:49:33 -- common/autotest_common.sh@941 -- # uname 00:22:00.402 11:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.402 11:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79022 00:22:00.402 11:49:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:00.402 11:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:00.402 11:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79022' 00:22:00.402 killing process with pid 79022 00:22:00.402 11:49:33 -- common/autotest_common.sh@955 -- # kill 79022 00:22:00.402 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.402 00:22:00.402 Latency(us) 00:22:00.402 [2024-11-20T11:49:33.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.402 [2024-11-20T11:49:33.445Z] =================================================================================================================== 00:22:00.402 [2024-11-20T11:49:33.445Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.402 11:49:33 -- common/autotest_common.sh@960 -- # wait 79022 00:22:00.662 11:49:33 -- target/tls.sh@209 -- # killprocess 78925 00:22:00.662 11:49:33 -- common/autotest_common.sh@936 -- # '[' -z 78925 ']' 00:22:00.662 11:49:33 -- common/autotest_common.sh@940 -- # kill -0 78925 00:22:00.662 11:49:33 -- common/autotest_common.sh@941 -- # uname 00:22:00.662 11:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.662 11:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78925 00:22:00.662 11:49:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:00.662 11:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:00.662 killing process with pid 78925 00:22:00.662 11:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78925' 00:22:00.662 11:49:33 -- common/autotest_common.sh@955 -- # kill 78925 00:22:00.662 11:49:33 -- common/autotest_common.sh@960 -- # wait 78925 00:22:00.923 11:49:33 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:00.923 11:49:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:00.923 11:49:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.923 11:49:33 -- target/tls.sh@212 -- # echo '{ 00:22:00.923 "subsystems": [ 00:22:00.923 { 00:22:00.923 "subsystem": "iobuf", 00:22:00.923 "config": [ 00:22:00.923 { 00:22:00.923 "method": "iobuf_set_options", 00:22:00.923 "params": { 00:22:00.923 "large_bufsize": 135168, 00:22:00.923 "large_pool_count": 1024, 00:22:00.923 "small_bufsize": 8192, 00:22:00.923 "small_pool_count": 8192 00:22:00.923 } 00:22:00.923 } 00:22:00.923 ] 00:22:00.923 }, 00:22:00.923 { 00:22:00.923 "subsystem": "sock", 00:22:00.923 "config": [ 00:22:00.923 { 00:22:00.923 "method": "sock_impl_set_options", 00:22:00.923 "params": { 00:22:00.923 "enable_ktls": false, 00:22:00.923 "enable_placement_id": 0, 00:22:00.923 "enable_quickack": false, 00:22:00.923 "enable_recv_pipe": true, 00:22:00.923 "enable_zerocopy_send_client": false, 00:22:00.923 "enable_zerocopy_send_server": true, 00:22:00.923 "impl_name": "posix", 00:22:00.923 "recv_buf_size": 2097152, 00:22:00.923 "send_buf_size": 2097152, 00:22:00.923 "tls_version": 0, 00:22:00.923 "zerocopy_threshold": 0 00:22:00.923 } 00:22:00.923 }, 00:22:00.923 { 00:22:00.923 
"method": "sock_impl_set_options", 00:22:00.923 "params": { 00:22:00.923 "enable_ktls": false, 00:22:00.923 "enable_placement_id": 0, 00:22:00.923 "enable_quickack": false, 00:22:00.923 "enable_recv_pipe": true, 00:22:00.923 "enable_zerocopy_send_client": false, 00:22:00.923 "enable_zerocopy_send_server": true, 00:22:00.923 "impl_name": "ssl", 00:22:00.923 "recv_buf_size": 4096, 00:22:00.923 "send_buf_size": 4096, 00:22:00.923 "tls_version": 0, 00:22:00.923 "zerocopy_threshold": 0 00:22:00.923 } 00:22:00.923 } 00:22:00.923 ] 00:22:00.923 }, 00:22:00.923 { 00:22:00.923 "subsystem": "vmd", 00:22:00.924 "config": [] 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "subsystem": "accel", 00:22:00.924 "config": [ 00:22:00.924 { 00:22:00.924 "method": "accel_set_options", 00:22:00.924 "params": { 00:22:00.924 "buf_count": 2048, 00:22:00.924 "large_cache_size": 16, 00:22:00.924 "sequence_count": 2048, 00:22:00.924 "small_cache_size": 128, 00:22:00.924 "task_count": 2048 00:22:00.924 } 00:22:00.924 } 00:22:00.924 ] 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "subsystem": "bdev", 00:22:00.924 "config": [ 00:22:00.924 { 00:22:00.924 "method": "bdev_set_options", 00:22:00.924 "params": { 00:22:00.924 "bdev_auto_examine": true, 00:22:00.924 "bdev_io_cache_size": 256, 00:22:00.924 "bdev_io_pool_size": 65535, 00:22:00.924 "iobuf_large_cache_size": 16, 00:22:00.924 "iobuf_small_cache_size": 128 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "bdev_raid_set_options", 00:22:00.924 "params": { 00:22:00.924 "process_window_size_kb": 1024 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "bdev_iscsi_set_options", 00:22:00.924 "params": { 00:22:00.924 "timeout_sec": 30 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "bdev_nvme_set_options", 00:22:00.924 "params": { 00:22:00.924 "action_on_timeout": "none", 00:22:00.924 "allow_accel_sequence": false, 00:22:00.924 "arbitration_burst": 0, 00:22:00.924 "bdev_retry_count": 3, 00:22:00.924 "ctrlr_loss_timeout_sec": 0, 00:22:00.924 "delay_cmd_submit": true, 00:22:00.924 "fast_io_fail_timeout_sec": 0, 00:22:00.924 "generate_uuids": false, 00:22:00.924 "high_priority_weight": 0, 00:22:00.924 "io_path_stat": false, 00:22:00.924 "io_queue_requests": 0, 00:22:00.924 "keep_alive_timeout_ms": 10000, 00:22:00.924 "low_priority_weight": 0, 00:22:00.924 "medium_priority_weight": 0, 00:22:00.924 "nvme_adminq_poll_period_us": 10000, 00:22:00.924 "nvme_ioq_poll_period_us": 0, 00:22:00.924 "reconnect_delay_sec": 0, 00:22:00.924 "timeout_admin_us": 0, 00:22:00.924 "timeout_us": 0, 00:22:00.924 "transport_ack_timeout": 0, 00:22:00.924 "transport_retry_count": 4, 00:22:00.924 "transport_tos": 0 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "bdev_nvme_set_hotplug", 00:22:00.924 "params": { 00:22:00.924 "enable": false, 00:22:00.924 "period_us": 100000 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "bdev_malloc_create", 00:22:00.924 "params": { 00:22:00.924 "block_size": 4096, 00:22:00.924 "name": "malloc0", 00:22:00.924 "num_blocks": 8192, 00:22:00.924 "optimal_io_boundary": 0, 00:22:00.924 "physical_block_size": 4096, 00:22:00.924 "uuid": "d038e762-89b3-4198-a27c-d30304a5ef2a" 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "bdev_wait_for_examine" 00:22:00.924 } 00:22:00.924 ] 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "subsystem": "nbd", 00:22:00.924 "config": [] 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "subsystem": "scheduler", 00:22:00.924 "config": [ 
00:22:00.924 { 00:22:00.924 "method": "framework_set_scheduler", 00:22:00.924 "params": { 00:22:00.924 "name": "static" 00:22:00.924 } 00:22:00.924 } 00:22:00.924 ] 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "subsystem": "nvmf", 00:22:00.924 "config": [ 00:22:00.924 { 00:22:00.924 "method": "nvmf_set_config", 00:22:00.924 "params": { 00:22:00.924 "admin_cmd_passthru": { 00:22:00.924 "identify_ctrlr": false 00:22:00.924 }, 00:22:00.924 "discovery_filter": "match_any" 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "nvmf_set_max_subsystems", 00:22:00.924 "params": { 00:22:00.924 "max_subsystems": 1024 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "nvmf_set_crdt", 00:22:00.924 "params": { 00:22:00.924 "crdt1": 0, 00:22:00.924 "crdt2": 0, 00:22:00.924 "crdt3": 0 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "nvmf_create_transport", 00:22:00.924 "params": { 00:22:00.924 "abort_timeout_sec": 1, 00:22:00.924 "buf_cache_size": 4294967295, 00:22:00.924 "c2h_success": false, 00:22:00.924 "dif_insert_or_strip": false, 00:22:00.924 "in_capsule_data_size": 4096, 00:22:00.924 "io_unit_size": 131072, 00:22:00.924 "max_aq_depth": 128, 00:22:00.924 "max_io_qpairs_per_ctrlr": 127, 00:22:00.924 "max_io_size": 131072, 00:22:00.924 "max_queue_depth": 128, 00:22:00.924 "num_shared_buffers": 511, 00:22:00.924 "sock_priority": 0, 00:22:00.924 "trtype": "TCP", 00:22:00.924 "zcopy": false 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "nvmf_create_subsystem", 00:22:00.924 "params": { 00:22:00.924 "allow_any_host": false, 00:22:00.924 "ana_reporting": false, 00:22:00.924 "max_cntlid": 65519, 00:22:00.924 "max_namespaces": 10, 00:22:00.924 "min_cntlid": 1, 00:22:00.924 "model_number": "SPDK bdev Controller", 00:22:00.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.924 "serial_number": "SPDK00000000000001" 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "nvmf_subsystem_add_host", 00:22:00.924 "params": { 00:22:00.924 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.924 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "nvmf_subsystem_add_ns", 00:22:00.924 "params": { 00:22:00.924 "namespace": { 00:22:00.924 "bdev_name": "malloc0", 00:22:00.924 "nguid": "D038E76289B34198A27CD30304A5EF2A", 00:22:00.924 "nsid": 1, 00:22:00.924 "uuid": "d038e762-89b3-4198-a27c-d30304a5ef2a" 00:22:00.924 }, 00:22:00.924 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:00.924 } 00:22:00.924 }, 00:22:00.924 { 00:22:00.924 "method": "nvmf_subsystem_add_listener", 00:22:00.924 "params": { 00:22:00.924 "listen_address": { 00:22:00.924 "adrfam": "IPv4", 00:22:00.924 "traddr": "10.0.0.2", 00:22:00.924 "trsvcid": "4420", 00:22:00.924 "trtype": "TCP" 00:22:00.924 }, 00:22:00.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.924 "secure_channel": true 00:22:00.924 } 00:22:00.924 } 00:22:00.924 ] 00:22:00.924 } 00:22:00.924 ] 00:22:00.924 }' 00:22:00.924 11:49:33 -- common/autotest_common.sh@10 -- # set +x 00:22:00.924 11:49:33 -- nvmf/common.sh@469 -- # nvmfpid=79095 00:22:00.924 11:49:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:00.925 11:49:33 -- nvmf/common.sh@470 -- # waitforlisten 79095 00:22:00.925 11:49:33 -- common/autotest_common.sh@829 -- # '[' -z 79095 ']' 00:22:00.925 11:49:33 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.925 11:49:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.925 11:49:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.925 11:49:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.925 11:49:33 -- common/autotest_common.sh@10 -- # set +x 00:22:00.925 [2024-11-20 11:49:33.869504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:00.925 [2024-11-20 11:49:33.869584] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.185 [2024-11-20 11:49:33.989083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.185 [2024-11-20 11:49:34.061874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:01.185 [2024-11-20 11:49:34.062004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.185 [2024-11-20 11:49:34.062011] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.185 [2024-11-20 11:49:34.062016] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.185 [2024-11-20 11:49:34.062034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.445 [2024-11-20 11:49:34.253675] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.445 [2024-11-20 11:49:34.285567] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.445 [2024-11-20 11:49:34.285747] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.705 11:49:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.705 11:49:34 -- common/autotest_common.sh@862 -- # return 0 00:22:01.705 11:49:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:01.705 11:49:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.705 11:49:34 -- common/autotest_common.sh@10 -- # set +x 00:22:01.705 11:49:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.705 11:49:34 -- target/tls.sh@216 -- # bdevperf_pid=79139 00:22:01.705 11:49:34 -- target/tls.sh@217 -- # waitforlisten 79139 /var/tmp/bdevperf.sock 00:22:01.705 11:49:34 -- common/autotest_common.sh@829 -- # '[' -z 79139 ']' 00:22:01.705 11:49:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.705 11:49:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.705 11:49:34 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:01.705 11:49:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
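For this last pass neither application is configured over RPC after startup: the JSON captured by save_config earlier ($tgtconf and $bdevperfconf) is echoed straight back in as -c /dev/fd/62 for nvmf_tgt and -c /dev/fd/63 for bdevperf. Those /dev/fd paths are what a bash process substitution expands to, so a rough equivalent of what the script does is:

    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")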
00:22:01.705 11:49:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.705 11:49:34 -- target/tls.sh@213 -- # echo '{ 00:22:01.705 "subsystems": [ 00:22:01.705 { 00:22:01.705 "subsystem": "iobuf", 00:22:01.705 "config": [ 00:22:01.705 { 00:22:01.705 "method": "iobuf_set_options", 00:22:01.705 "params": { 00:22:01.705 "large_bufsize": 135168, 00:22:01.705 "large_pool_count": 1024, 00:22:01.705 "small_bufsize": 8192, 00:22:01.705 "small_pool_count": 8192 00:22:01.705 } 00:22:01.705 } 00:22:01.705 ] 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "subsystem": "sock", 00:22:01.705 "config": [ 00:22:01.705 { 00:22:01.705 "method": "sock_impl_set_options", 00:22:01.705 "params": { 00:22:01.705 "enable_ktls": false, 00:22:01.705 "enable_placement_id": 0, 00:22:01.705 "enable_quickack": false, 00:22:01.705 "enable_recv_pipe": true, 00:22:01.705 "enable_zerocopy_send_client": false, 00:22:01.705 "enable_zerocopy_send_server": true, 00:22:01.705 "impl_name": "posix", 00:22:01.705 "recv_buf_size": 2097152, 00:22:01.705 "send_buf_size": 2097152, 00:22:01.705 "tls_version": 0, 00:22:01.705 "zerocopy_threshold": 0 00:22:01.705 } 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "method": "sock_impl_set_options", 00:22:01.705 "params": { 00:22:01.705 "enable_ktls": false, 00:22:01.705 "enable_placement_id": 0, 00:22:01.705 "enable_quickack": false, 00:22:01.705 "enable_recv_pipe": true, 00:22:01.705 "enable_zerocopy_send_client": false, 00:22:01.705 "enable_zerocopy_send_server": true, 00:22:01.705 "impl_name": "ssl", 00:22:01.705 "recv_buf_size": 4096, 00:22:01.705 "send_buf_size": 4096, 00:22:01.705 "tls_version": 0, 00:22:01.705 "zerocopy_threshold": 0 00:22:01.705 } 00:22:01.705 } 00:22:01.705 ] 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "subsystem": "vmd", 00:22:01.705 "config": [] 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "subsystem": "accel", 00:22:01.705 "config": [ 00:22:01.705 { 00:22:01.705 "method": "accel_set_options", 00:22:01.705 "params": { 00:22:01.705 "buf_count": 2048, 00:22:01.705 "large_cache_size": 16, 00:22:01.705 "sequence_count": 2048, 00:22:01.705 "small_cache_size": 128, 00:22:01.705 "task_count": 2048 00:22:01.705 } 00:22:01.705 } 00:22:01.705 ] 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "subsystem": "bdev", 00:22:01.705 "config": [ 00:22:01.705 { 00:22:01.705 "method": "bdev_set_options", 00:22:01.705 "params": { 00:22:01.705 "bdev_auto_examine": true, 00:22:01.705 "bdev_io_cache_size": 256, 00:22:01.705 "bdev_io_pool_size": 65535, 00:22:01.705 "iobuf_large_cache_size": 16, 00:22:01.705 "iobuf_small_cache_size": 128 00:22:01.705 } 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "method": "bdev_raid_set_options", 00:22:01.705 "params": { 00:22:01.705 "process_window_size_kb": 1024 00:22:01.705 } 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "method": "bdev_iscsi_set_options", 00:22:01.705 "params": { 00:22:01.705 "timeout_sec": 30 00:22:01.705 } 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "method": "bdev_nvme_set_options", 00:22:01.705 "params": { 00:22:01.705 "action_on_timeout": "none", 00:22:01.705 "allow_accel_sequence": false, 00:22:01.705 "arbitration_burst": 0, 00:22:01.705 "bdev_retry_count": 3, 00:22:01.705 "ctrlr_loss_timeout_sec": 0, 00:22:01.705 "delay_cmd_submit": true, 00:22:01.705 "fast_io_fail_timeout_sec": 0, 00:22:01.705 "generate_uuids": false, 00:22:01.705 "high_priority_weight": 0, 00:22:01.705 "io_path_stat": false, 00:22:01.705 "io_queue_requests": 512, 00:22:01.705 "keep_alive_timeout_ms": 10000, 00:22:01.705 "low_priority_weight": 0, 00:22:01.705 
"medium_priority_weight": 0, 00:22:01.705 "nvme_adminq_poll_period_us": 10000, 00:22:01.705 "nvme_ioq_poll_period_us": 0, 00:22:01.705 "reconnect_delay_sec": 0, 00:22:01.705 "timeout_admin_us": 0, 00:22:01.705 "timeout_us": 0, 00:22:01.705 "transport_ack_timeout": 0, 00:22:01.705 "transport_retry_count": 4, 00:22:01.705 "transport_tos": 0 00:22:01.705 } 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "method": "bdev_nvme_attach_controller", 00:22:01.705 "params": { 00:22:01.705 "adrfam": "IPv4", 00:22:01.705 "ctrlr_loss_timeout_sec": 0, 00:22:01.705 "ddgst": false, 00:22:01.705 "fast_io_fail_timeout_sec": 0, 00:22:01.705 "hdgst": false, 00:22:01.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.705 "name": "TLSTEST", 00:22:01.705 "prchk_guard": false, 00:22:01.705 "prchk_reftag": false, 00:22:01.705 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:22:01.705 "reconnect_delay_sec": 0, 00:22:01.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.705 "traddr": "10.0.0.2", 00:22:01.705 "trsvcid": "4420", 00:22:01.705 "trtype": "TCP" 00:22:01.705 } 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "method": "bdev_nvme_set_hotplug", 00:22:01.705 "params": { 00:22:01.705 "enable": false, 00:22:01.705 "period_us": 100000 00:22:01.705 } 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "method": "bdev_wait_for_examine" 00:22:01.705 } 00:22:01.705 ] 00:22:01.705 }, 00:22:01.705 { 00:22:01.705 "subsystem": "nbd", 00:22:01.705 "config": [] 00:22:01.705 } 00:22:01.706 ] 00:22:01.706 }' 00:22:01.706 11:49:34 -- common/autotest_common.sh@10 -- # set +x 00:22:01.965 [2024-11-20 11:49:34.787269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:01.965 [2024-11-20 11:49:34.787344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79139 ] 00:22:01.965 [2024-11-20 11:49:34.924242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.225 [2024-11-20 11:49:35.008923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.225 [2024-11-20 11:49:35.141364] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.795 11:49:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.795 11:49:35 -- common/autotest_common.sh@862 -- # return 0 00:22:02.795 11:49:35 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:02.795 Running I/O for 10 seconds... 
00:22:12.790 00:22:12.790 Latency(us) 00:22:12.790 [2024-11-20T11:49:45.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.790 [2024-11-20T11:49:45.833Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:12.790 Verification LBA range: start 0x0 length 0x2000 00:22:12.790 TLSTESTn1 : 10.01 10009.19 39.10 0.00 0.00 12772.96 1509.62 19002.58 00:22:12.790 [2024-11-20T11:49:45.833Z] =================================================================================================================== 00:22:12.790 [2024-11-20T11:49:45.833Z] Total : 10009.19 39.10 0.00 0.00 12772.96 1509.62 19002.58 00:22:12.790 0 00:22:12.790 11:49:45 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.790 11:49:45 -- target/tls.sh@223 -- # killprocess 79139 00:22:12.790 11:49:45 -- common/autotest_common.sh@936 -- # '[' -z 79139 ']' 00:22:12.790 11:49:45 -- common/autotest_common.sh@940 -- # kill -0 79139 00:22:12.790 11:49:45 -- common/autotest_common.sh@941 -- # uname 00:22:12.790 11:49:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:12.790 11:49:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79139 00:22:12.790 11:49:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:12.790 11:49:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:12.790 killing process with pid 79139 00:22:12.790 11:49:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79139' 00:22:12.790 11:49:45 -- common/autotest_common.sh@955 -- # kill 79139 00:22:12.790 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.790 00:22:12.790 Latency(us) 00:22:12.790 [2024-11-20T11:49:45.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.790 [2024-11-20T11:49:45.833Z] =================================================================================================================== 00:22:12.790 [2024-11-20T11:49:45.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.790 11:49:45 -- common/autotest_common.sh@960 -- # wait 79139 00:22:13.049 11:49:46 -- target/tls.sh@224 -- # killprocess 79095 00:22:13.049 11:49:46 -- common/autotest_common.sh@936 -- # '[' -z 79095 ']' 00:22:13.049 11:49:46 -- common/autotest_common.sh@940 -- # kill -0 79095 00:22:13.049 11:49:46 -- common/autotest_common.sh@941 -- # uname 00:22:13.049 11:49:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:13.049 11:49:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79095 00:22:13.049 11:49:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:13.049 11:49:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:13.049 killing process with pid 79095 00:22:13.049 11:49:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79095' 00:22:13.049 11:49:46 -- common/autotest_common.sh@955 -- # kill 79095 00:22:13.049 11:49:46 -- common/autotest_common.sh@960 -- # wait 79095 00:22:13.309 11:49:46 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:22:13.309 11:49:46 -- target/tls.sh@227 -- # cleanup 00:22:13.309 11:49:46 -- target/tls.sh@15 -- # process_shm --id 0 00:22:13.309 11:49:46 -- common/autotest_common.sh@806 -- # type=--id 00:22:13.309 11:49:46 -- common/autotest_common.sh@807 -- # id=0 00:22:13.309 11:49:46 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:13.309 11:49:46 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:22:13.309 11:49:46 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:13.309 11:49:46 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:13.309 11:49:46 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:13.309 11:49:46 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:13.309 nvmf_trace.0 00:22:13.309 11:49:46 -- common/autotest_common.sh@821 -- # return 0 00:22:13.309 11:49:46 -- target/tls.sh@16 -- # killprocess 79139 00:22:13.309 11:49:46 -- common/autotest_common.sh@936 -- # '[' -z 79139 ']' 00:22:13.309 11:49:46 -- common/autotest_common.sh@940 -- # kill -0 79139 00:22:13.309 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79139) - No such process 00:22:13.309 Process with pid 79139 is not found 00:22:13.309 11:49:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79139 is not found' 00:22:13.309 11:49:46 -- target/tls.sh@17 -- # nvmftestfini 00:22:13.309 11:49:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:13.309 11:49:46 -- nvmf/common.sh@116 -- # sync 00:22:13.570 11:49:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.570 11:49:46 -- nvmf/common.sh@119 -- # set +e 00:22:13.570 11:49:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.570 11:49:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.570 rmmod nvme_tcp 00:22:13.570 rmmod nvme_fabrics 00:22:13.570 rmmod nvme_keyring 00:22:13.570 11:49:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.570 11:49:46 -- nvmf/common.sh@123 -- # set -e 00:22:13.570 11:49:46 -- nvmf/common.sh@124 -- # return 0 00:22:13.570 11:49:46 -- nvmf/common.sh@477 -- # '[' -n 79095 ']' 00:22:13.570 11:49:46 -- nvmf/common.sh@478 -- # killprocess 79095 00:22:13.570 11:49:46 -- common/autotest_common.sh@936 -- # '[' -z 79095 ']' 00:22:13.570 11:49:46 -- common/autotest_common.sh@940 -- # kill -0 79095 00:22:13.570 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79095) - No such process 00:22:13.570 Process with pid 79095 is not found 00:22:13.570 11:49:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79095 is not found' 00:22:13.570 11:49:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:13.570 11:49:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:13.570 11:49:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:13.570 11:49:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.570 11:49:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:13.570 11:49:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.570 11:49:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.570 11:49:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.570 11:49:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:13.570 11:49:46 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:22:13.570 00:22:13.570 real 1m8.460s 00:22:13.570 user 1m40.012s 00:22:13.570 sys 0m26.530s 00:22:13.570 11:49:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:13.570 11:49:46 -- common/autotest_common.sh@10 -- # set +x 00:22:13.570 ************************************ 00:22:13.570 END TEST nvmf_tls 00:22:13.570 
************************************ 00:22:13.570 11:49:46 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:13.570 11:49:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:13.570 11:49:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:13.570 11:49:46 -- common/autotest_common.sh@10 -- # set +x 00:22:13.570 ************************************ 00:22:13.570 START TEST nvmf_fips 00:22:13.570 ************************************ 00:22:13.570 11:49:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:13.831 * Looking for test storage... 00:22:13.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:22:13.831 11:49:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:13.831 11:49:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:13.831 11:49:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:13.831 11:49:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:13.831 11:49:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:13.831 11:49:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:13.831 11:49:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:13.831 11:49:46 -- scripts/common.sh@335 -- # IFS=.-: 00:22:13.831 11:49:46 -- scripts/common.sh@335 -- # read -ra ver1 00:22:13.831 11:49:46 -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.831 11:49:46 -- scripts/common.sh@336 -- # read -ra ver2 00:22:13.831 11:49:46 -- scripts/common.sh@337 -- # local 'op=<' 00:22:13.831 11:49:46 -- scripts/common.sh@339 -- # ver1_l=2 00:22:13.831 11:49:46 -- scripts/common.sh@340 -- # ver2_l=1 00:22:13.831 11:49:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:13.831 11:49:46 -- scripts/common.sh@343 -- # case "$op" in 00:22:13.831 11:49:46 -- scripts/common.sh@344 -- # : 1 00:22:13.831 11:49:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:13.831 11:49:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.831 11:49:46 -- scripts/common.sh@364 -- # decimal 1 00:22:13.831 11:49:46 -- scripts/common.sh@352 -- # local d=1 00:22:13.831 11:49:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.831 11:49:46 -- scripts/common.sh@354 -- # echo 1 00:22:13.831 11:49:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:13.831 11:49:46 -- scripts/common.sh@365 -- # decimal 2 00:22:13.831 11:49:46 -- scripts/common.sh@352 -- # local d=2 00:22:13.831 11:49:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.831 11:49:46 -- scripts/common.sh@354 -- # echo 2 00:22:13.831 11:49:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:13.831 11:49:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:13.831 11:49:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:13.831 11:49:46 -- scripts/common.sh@367 -- # return 0 00:22:13.831 11:49:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.831 11:49:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.831 --rc genhtml_branch_coverage=1 00:22:13.831 --rc genhtml_function_coverage=1 00:22:13.831 --rc genhtml_legend=1 00:22:13.831 --rc geninfo_all_blocks=1 00:22:13.831 --rc geninfo_unexecuted_blocks=1 00:22:13.831 00:22:13.831 ' 00:22:13.831 11:49:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.831 --rc genhtml_branch_coverage=1 00:22:13.831 --rc genhtml_function_coverage=1 00:22:13.831 --rc genhtml_legend=1 00:22:13.831 --rc geninfo_all_blocks=1 00:22:13.831 --rc geninfo_unexecuted_blocks=1 00:22:13.831 00:22:13.831 ' 00:22:13.831 11:49:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.831 --rc genhtml_branch_coverage=1 00:22:13.831 --rc genhtml_function_coverage=1 00:22:13.831 --rc genhtml_legend=1 00:22:13.831 --rc geninfo_all_blocks=1 00:22:13.831 --rc geninfo_unexecuted_blocks=1 00:22:13.831 00:22:13.831 ' 00:22:13.831 11:49:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:13.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.831 --rc genhtml_branch_coverage=1 00:22:13.831 --rc genhtml_function_coverage=1 00:22:13.831 --rc genhtml_legend=1 00:22:13.831 --rc geninfo_all_blocks=1 00:22:13.831 --rc geninfo_unexecuted_blocks=1 00:22:13.831 00:22:13.831 ' 00:22:13.831 11:49:46 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:13.831 11:49:46 -- nvmf/common.sh@7 -- # uname -s 00:22:13.831 11:49:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.831 11:49:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.831 11:49:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.831 11:49:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.831 11:49:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.831 11:49:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.831 11:49:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.831 11:49:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.831 11:49:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.831 11:49:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.831 11:49:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:22:13.831 
11:49:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:22:13.831 11:49:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.831 11:49:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.831 11:49:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:13.831 11:49:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.831 11:49:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.831 11:49:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.831 11:49:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.831 11:49:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.832 11:49:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.832 11:49:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.832 11:49:46 -- paths/export.sh@5 -- # export PATH 00:22:13.832 11:49:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.832 11:49:46 -- nvmf/common.sh@46 -- # : 0 00:22:13.832 11:49:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:13.832 11:49:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:13.832 11:49:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:13.832 11:49:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.832 11:49:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.832 11:49:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:13.832 11:49:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:13.832 11:49:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:13.832 11:49:46 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:13.832 11:49:46 -- fips/fips.sh@89 -- # check_openssl_version 00:22:13.832 11:49:46 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:13.832 11:49:46 -- fips/fips.sh@85 -- # openssl version 00:22:13.832 11:49:46 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:13.832 11:49:46 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:22:13.832 11:49:46 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:13.832 11:49:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:13.832 11:49:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:13.832 11:49:46 -- scripts/common.sh@335 -- # IFS=.-: 00:22:13.832 11:49:46 -- scripts/common.sh@335 -- # read -ra ver1 00:22:13.832 11:49:46 -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.832 11:49:46 -- scripts/common.sh@336 -- # read -ra ver2 00:22:13.832 11:49:46 -- scripts/common.sh@337 -- # local 'op=>=' 00:22:13.832 11:49:46 -- scripts/common.sh@339 -- # ver1_l=3 00:22:13.832 11:49:46 -- scripts/common.sh@340 -- # ver2_l=3 00:22:13.832 11:49:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:13.832 11:49:46 -- scripts/common.sh@343 -- # case "$op" in 00:22:13.832 11:49:46 -- scripts/common.sh@347 -- # : 1 00:22:13.832 11:49:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:13.832 11:49:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:13.832 11:49:46 -- scripts/common.sh@364 -- # decimal 3 00:22:13.832 11:49:46 -- scripts/common.sh@352 -- # local d=3 00:22:13.832 11:49:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:13.832 11:49:46 -- scripts/common.sh@354 -- # echo 3 00:22:13.832 11:49:46 -- scripts/common.sh@364 -- # ver1[v]=3 00:22:13.832 11:49:46 -- scripts/common.sh@365 -- # decimal 3 00:22:13.832 11:49:46 -- scripts/common.sh@352 -- # local d=3 00:22:13.832 11:49:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:13.832 11:49:46 -- scripts/common.sh@354 -- # echo 3 00:22:13.832 11:49:46 -- scripts/common.sh@365 -- # ver2[v]=3 00:22:13.832 11:49:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:13.832 11:49:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:13.832 11:49:46 -- scripts/common.sh@363 -- # (( v++ )) 00:22:13.832 11:49:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:14.092 11:49:46 -- scripts/common.sh@364 -- # decimal 1 00:22:14.092 11:49:46 -- scripts/common.sh@352 -- # local d=1 00:22:14.092 11:49:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.092 11:49:46 -- scripts/common.sh@354 -- # echo 1 00:22:14.092 11:49:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:14.092 11:49:46 -- scripts/common.sh@365 -- # decimal 0 00:22:14.092 11:49:46 -- scripts/common.sh@352 -- # local d=0 00:22:14.092 11:49:46 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:14.092 11:49:46 -- scripts/common.sh@354 -- # echo 0 00:22:14.092 11:49:46 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:14.092 11:49:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:14.092 11:49:46 -- scripts/common.sh@366 -- # return 0 00:22:14.092 11:49:46 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:14.092 11:49:46 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:14.092 11:49:46 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:14.092 11:49:46 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:14.092 11:49:46 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:14.092 11:49:46 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:14.092 11:49:46 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:14.092 11:49:46 -- fips/fips.sh@113 -- # build_openssl_config 00:22:14.092 11:49:46 -- fips/fips.sh@37 -- # cat 00:22:14.092 11:49:46 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:14.092 11:49:46 -- fips/fips.sh@58 -- # cat - 00:22:14.092 11:49:46 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:14.092 11:49:46 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:14.092 11:49:46 -- fips/fips.sh@116 -- # mapfile -t providers 00:22:14.092 11:49:46 -- fips/fips.sh@116 -- # openssl list -providers 00:22:14.092 11:49:46 -- fips/fips.sh@116 -- # grep name 00:22:14.092 11:49:46 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:14.092 11:49:46 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:14.092 11:49:46 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:14.092 11:49:46 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:14.092 11:49:46 -- fips/fips.sh@127 -- # : 00:22:14.092 11:49:46 -- common/autotest_common.sh@650 -- # local es=0 00:22:14.092 11:49:46 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:14.092 11:49:46 -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:14.092 11:49:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.092 11:49:46 -- common/autotest_common.sh@642 -- # type -t openssl 00:22:14.092 11:49:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.092 11:49:46 -- common/autotest_common.sh@644 -- # type -P openssl 00:22:14.092 11:49:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.092 11:49:46 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:14.092 11:49:46 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:14.092 11:49:46 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:14.092 Error setting digest 00:22:14.092 40D2C048D47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:14.092 40D2C048D47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:14.092 11:49:47 -- common/autotest_common.sh@653 -- # es=1 00:22:14.092 11:49:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.092 11:49:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.092 11:49:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.092 11:49:47 -- fips/fips.sh@130 -- # nvmftestinit 00:22:14.092 11:49:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:14.092 11:49:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.092 11:49:47 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:22:14.092 11:49:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:14.092 11:49:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:14.092 11:49:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.092 11:49:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.092 11:49:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.092 11:49:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:14.092 11:49:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:14.092 11:49:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:14.092 11:49:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:14.092 11:49:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:14.092 11:49:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:14.092 11:49:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.092 11:49:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.092 11:49:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:14.092 11:49:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:14.092 11:49:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:14.092 11:49:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:14.092 11:49:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:14.092 11:49:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.092 11:49:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:14.092 11:49:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:14.092 11:49:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:14.092 11:49:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:14.092 11:49:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:14.092 11:49:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:14.092 Cannot find device "nvmf_tgt_br" 00:22:14.092 11:49:47 -- nvmf/common.sh@154 -- # true 00:22:14.092 11:49:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.092 Cannot find device "nvmf_tgt_br2" 00:22:14.092 11:49:47 -- nvmf/common.sh@155 -- # true 00:22:14.092 11:49:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:14.092 11:49:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:14.092 Cannot find device "nvmf_tgt_br" 00:22:14.092 11:49:47 -- nvmf/common.sh@157 -- # true 00:22:14.092 11:49:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:14.352 Cannot find device "nvmf_tgt_br2" 00:22:14.352 11:49:47 -- nvmf/common.sh@158 -- # true 00:22:14.352 11:49:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:14.352 11:49:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:14.352 11:49:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:14.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.352 11:49:47 -- nvmf/common.sh@161 -- # true 00:22:14.352 11:49:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:14.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.352 11:49:47 -- nvmf/common.sh@162 -- # true 00:22:14.352 11:49:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:14.352 11:49:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:14.352 11:49:47 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:14.352 11:49:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:14.352 11:49:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.352 11:49:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.352 11:49:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.352 11:49:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:14.352 11:49:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:14.352 11:49:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:14.352 11:49:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:14.352 11:49:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:14.352 11:49:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:14.352 11:49:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.352 11:49:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.352 11:49:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.352 11:49:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:14.352 11:49:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:14.352 11:49:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.352 11:49:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:14.352 11:49:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.352 11:49:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.352 11:49:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.352 11:49:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:14.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:22:14.353 00:22:14.353 --- 10.0.0.2 ping statistics --- 00:22:14.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.353 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:14.353 11:49:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:14.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:14.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:22:14.353 00:22:14.353 --- 10.0.0.3 ping statistics --- 00:22:14.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.353 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:14.353 11:49:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:14.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:22:14.613 00:22:14.613 --- 10.0.0.1 ping statistics --- 00:22:14.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.613 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:14.613 11:49:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.613 11:49:47 -- nvmf/common.sh@421 -- # return 0 00:22:14.613 11:49:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:14.613 11:49:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.613 11:49:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:14.613 11:49:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:14.613 11:49:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.613 11:49:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:14.613 11:49:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:14.613 11:49:47 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:14.613 11:49:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:14.613 11:49:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.613 11:49:47 -- common/autotest_common.sh@10 -- # set +x 00:22:14.613 11:49:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:14.613 11:49:47 -- nvmf/common.sh@469 -- # nvmfpid=79508 00:22:14.613 11:49:47 -- nvmf/common.sh@470 -- # waitforlisten 79508 00:22:14.613 11:49:47 -- common/autotest_common.sh@829 -- # '[' -z 79508 ']' 00:22:14.613 11:49:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.613 11:49:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.613 11:49:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.613 11:49:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.613 11:49:47 -- common/autotest_common.sh@10 -- # set +x 00:22:14.613 [2024-11-20 11:49:47.490770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:14.613 [2024-11-20 11:49:47.490842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.613 [2024-11-20 11:49:47.625816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.873 [2024-11-20 11:49:47.700231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:14.873 [2024-11-20 11:49:47.700365] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.873 [2024-11-20 11:49:47.700372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.873 [2024-11-20 11:49:47.700377] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.873 [2024-11-20 11:49:47.700398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.443 11:49:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.443 11:49:48 -- common/autotest_common.sh@862 -- # return 0 00:22:15.443 11:49:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:15.443 11:49:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.443 11:49:48 -- common/autotest_common.sh@10 -- # set +x 00:22:15.443 11:49:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.443 11:49:48 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:15.443 11:49:48 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:15.443 11:49:48 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:15.443 11:49:48 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:15.443 11:49:48 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:15.443 11:49:48 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:15.443 11:49:48 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:15.443 11:49:48 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.703 [2024-11-20 11:49:48.566154] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.703 [2024-11-20 11:49:48.582088] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:15.703 [2024-11-20 11:49:48.582241] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.703 malloc0 00:22:15.703 11:49:48 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.703 11:49:48 -- fips/fips.sh@147 -- # bdevperf_pid=79560 00:22:15.703 11:49:48 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.703 11:49:48 -- fips/fips.sh@148 -- # waitforlisten 79560 /var/tmp/bdevperf.sock 00:22:15.703 11:49:48 -- common/autotest_common.sh@829 -- # '[' -z 79560 ']' 00:22:15.703 11:49:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.703 11:49:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.703 11:49:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.703 11:49:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.703 11:49:48 -- common/autotest_common.sh@10 -- # set +x 00:22:15.704 [2024-11-20 11:49:48.721183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:15.704 [2024-11-20 11:49:48.721262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79560 ] 00:22:15.964 [2024-11-20 11:49:48.858061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.964 [2024-11-20 11:49:48.999427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.533 11:49:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.533 11:49:49 -- common/autotest_common.sh@862 -- # return 0 00:22:16.533 11:49:49 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:16.793 [2024-11-20 11:49:49.705722] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.793 TLSTESTn1 00:22:16.793 11:49:49 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:17.052 Running I/O for 10 seconds... 00:22:27.043 00:22:27.043 Latency(us) 00:22:27.043 [2024-11-20T11:50:00.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.043 [2024-11-20T11:50:00.086Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:27.043 Verification LBA range: start 0x0 length 0x2000 00:22:27.043 TLSTESTn1 : 10.01 8481.39 33.13 0.00 0.00 15071.63 4035.19 21292.05 00:22:27.043 [2024-11-20T11:50:00.086Z] =================================================================================================================== 00:22:27.043 [2024-11-20T11:50:00.086Z] Total : 8481.39 33.13 0.00 0.00 15071.63 4035.19 21292.05 00:22:27.043 0 00:22:27.043 11:49:59 -- fips/fips.sh@1 -- # cleanup 00:22:27.043 11:49:59 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:27.043 11:49:59 -- common/autotest_common.sh@806 -- # type=--id 00:22:27.043 11:49:59 -- common/autotest_common.sh@807 -- # id=0 00:22:27.043 11:49:59 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:27.043 11:49:59 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:27.043 11:49:59 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:27.043 11:49:59 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:27.043 11:49:59 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:27.043 11:49:59 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:27.043 nvmf_trace.0 00:22:27.043 11:49:59 -- common/autotest_common.sh@821 -- # return 0 00:22:27.043 11:49:59 -- fips/fips.sh@16 -- # killprocess 79560 00:22:27.043 11:49:59 -- common/autotest_common.sh@936 -- # '[' -z 79560 ']' 00:22:27.043 11:49:59 -- common/autotest_common.sh@940 -- # kill -0 79560 00:22:27.043 11:49:59 -- common/autotest_common.sh@941 -- # uname 00:22:27.043 11:50:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:27.043 11:50:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79560 00:22:27.044 11:50:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:27.044 11:50:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:27.044 
11:50:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79560' 00:22:27.044 killing process with pid 79560 00:22:27.044 11:50:00 -- common/autotest_common.sh@955 -- # kill 79560 00:22:27.044 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.044 00:22:27.044 Latency(us) 00:22:27.044 [2024-11-20T11:50:00.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.044 [2024-11-20T11:50:00.087Z] =================================================================================================================== 00:22:27.044 [2024-11-20T11:50:00.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.044 11:50:00 -- common/autotest_common.sh@960 -- # wait 79560 00:22:27.616 11:50:00 -- fips/fips.sh@17 -- # nvmftestfini 00:22:27.616 11:50:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:27.616 11:50:00 -- nvmf/common.sh@116 -- # sync 00:22:27.616 11:50:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:27.616 11:50:00 -- nvmf/common.sh@119 -- # set +e 00:22:27.616 11:50:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:27.616 11:50:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:27.616 rmmod nvme_tcp 00:22:27.616 rmmod nvme_fabrics 00:22:27.616 rmmod nvme_keyring 00:22:27.616 11:50:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:27.616 11:50:00 -- nvmf/common.sh@123 -- # set -e 00:22:27.616 11:50:00 -- nvmf/common.sh@124 -- # return 0 00:22:27.616 11:50:00 -- nvmf/common.sh@477 -- # '[' -n 79508 ']' 00:22:27.616 11:50:00 -- nvmf/common.sh@478 -- # killprocess 79508 00:22:27.616 11:50:00 -- common/autotest_common.sh@936 -- # '[' -z 79508 ']' 00:22:27.616 11:50:00 -- common/autotest_common.sh@940 -- # kill -0 79508 00:22:27.616 11:50:00 -- common/autotest_common.sh@941 -- # uname 00:22:27.616 11:50:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:27.616 11:50:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79508 00:22:27.616 11:50:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:27.616 11:50:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:27.616 killing process with pid 79508 00:22:27.616 11:50:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79508' 00:22:27.616 11:50:00 -- common/autotest_common.sh@955 -- # kill 79508 00:22:27.616 11:50:00 -- common/autotest_common.sh@960 -- # wait 79508 00:22:27.878 11:50:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:27.878 11:50:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:27.878 11:50:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:27.878 11:50:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.878 11:50:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:27.878 11:50:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.878 11:50:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.878 11:50:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.878 11:50:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:27.878 11:50:00 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:27.878 00:22:27.879 real 0m14.278s 00:22:27.879 user 0m17.356s 00:22:27.879 sys 0m6.824s 00:22:27.879 11:50:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:27.879 11:50:00 -- common/autotest_common.sh@10 -- # set +x 00:22:27.879 ************************************ 00:22:27.879 END TEST nvmf_fips 
00:22:27.879 ************************************ 00:22:27.879 11:50:00 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:27.879 11:50:00 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:27.879 11:50:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:27.879 11:50:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:27.879 11:50:00 -- common/autotest_common.sh@10 -- # set +x 00:22:27.879 ************************************ 00:22:27.879 START TEST nvmf_fuzz 00:22:27.879 ************************************ 00:22:27.879 11:50:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:28.139 * Looking for test storage... 00:22:28.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:28.139 11:50:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:28.139 11:50:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:28.139 11:50:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:28.139 11:50:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:28.139 11:50:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:28.139 11:50:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:28.139 11:50:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:28.139 11:50:01 -- scripts/common.sh@335 -- # IFS=.-: 00:22:28.139 11:50:01 -- scripts/common.sh@335 -- # read -ra ver1 00:22:28.139 11:50:01 -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.139 11:50:01 -- scripts/common.sh@336 -- # read -ra ver2 00:22:28.139 11:50:01 -- scripts/common.sh@337 -- # local 'op=<' 00:22:28.139 11:50:01 -- scripts/common.sh@339 -- # ver1_l=2 00:22:28.139 11:50:01 -- scripts/common.sh@340 -- # ver2_l=1 00:22:28.139 11:50:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:28.139 11:50:01 -- scripts/common.sh@343 -- # case "$op" in 00:22:28.139 11:50:01 -- scripts/common.sh@344 -- # : 1 00:22:28.139 11:50:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:28.139 11:50:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.139 11:50:01 -- scripts/common.sh@364 -- # decimal 1 00:22:28.139 11:50:01 -- scripts/common.sh@352 -- # local d=1 00:22:28.139 11:50:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.139 11:50:01 -- scripts/common.sh@354 -- # echo 1 00:22:28.139 11:50:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:28.139 11:50:01 -- scripts/common.sh@365 -- # decimal 2 00:22:28.139 11:50:01 -- scripts/common.sh@352 -- # local d=2 00:22:28.139 11:50:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.139 11:50:01 -- scripts/common.sh@354 -- # echo 2 00:22:28.139 11:50:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:28.139 11:50:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:28.139 11:50:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:28.139 11:50:01 -- scripts/common.sh@367 -- # return 0 00:22:28.139 11:50:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.139 11:50:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:28.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.139 --rc genhtml_branch_coverage=1 00:22:28.139 --rc genhtml_function_coverage=1 00:22:28.139 --rc genhtml_legend=1 00:22:28.139 --rc geninfo_all_blocks=1 00:22:28.139 --rc geninfo_unexecuted_blocks=1 00:22:28.139 00:22:28.139 ' 00:22:28.139 11:50:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:28.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.139 --rc genhtml_branch_coverage=1 00:22:28.139 --rc genhtml_function_coverage=1 00:22:28.139 --rc genhtml_legend=1 00:22:28.139 --rc geninfo_all_blocks=1 00:22:28.139 --rc geninfo_unexecuted_blocks=1 00:22:28.140 00:22:28.140 ' 00:22:28.140 11:50:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:28.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.140 --rc genhtml_branch_coverage=1 00:22:28.140 --rc genhtml_function_coverage=1 00:22:28.140 --rc genhtml_legend=1 00:22:28.140 --rc geninfo_all_blocks=1 00:22:28.140 --rc geninfo_unexecuted_blocks=1 00:22:28.140 00:22:28.140 ' 00:22:28.140 11:50:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:28.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.140 --rc genhtml_branch_coverage=1 00:22:28.140 --rc genhtml_function_coverage=1 00:22:28.140 --rc genhtml_legend=1 00:22:28.140 --rc geninfo_all_blocks=1 00:22:28.140 --rc geninfo_unexecuted_blocks=1 00:22:28.140 00:22:28.140 ' 00:22:28.140 11:50:01 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:28.140 11:50:01 -- nvmf/common.sh@7 -- # uname -s 00:22:28.140 11:50:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.140 11:50:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.140 11:50:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.140 11:50:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.140 11:50:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.140 11:50:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.140 11:50:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.140 11:50:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.140 11:50:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.140 11:50:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.140 11:50:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
00:22:28.140 11:50:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:22:28.140 11:50:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.140 11:50:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.140 11:50:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:28.140 11:50:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:28.140 11:50:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.140 11:50:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.140 11:50:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.140 11:50:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.140 11:50:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.140 11:50:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.140 11:50:01 -- paths/export.sh@5 -- # export PATH 00:22:28.140 11:50:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.140 11:50:01 -- nvmf/common.sh@46 -- # : 0 00:22:28.140 11:50:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:28.140 11:50:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:28.140 11:50:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:28.140 11:50:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.140 11:50:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.140 11:50:01 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:22:28.140 11:50:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:28.140 11:50:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:28.140 11:50:01 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:28.140 11:50:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:28.140 11:50:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.140 11:50:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:28.140 11:50:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:28.140 11:50:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:28.140 11:50:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.400 11:50:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.400 11:50:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.400 11:50:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:28.400 11:50:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:28.400 11:50:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:28.400 11:50:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:28.400 11:50:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:28.400 11:50:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:28.400 11:50:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.400 11:50:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.400 11:50:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:28.400 11:50:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:28.401 11:50:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:28.401 11:50:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:28.401 11:50:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:28.401 11:50:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.401 11:50:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:28.401 11:50:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:28.401 11:50:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:28.401 11:50:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:28.401 11:50:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:28.401 11:50:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:28.401 Cannot find device "nvmf_tgt_br" 00:22:28.401 11:50:01 -- nvmf/common.sh@154 -- # true 00:22:28.401 11:50:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.401 Cannot find device "nvmf_tgt_br2" 00:22:28.401 11:50:01 -- nvmf/common.sh@155 -- # true 00:22:28.401 11:50:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:28.401 11:50:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:28.401 Cannot find device "nvmf_tgt_br" 00:22:28.401 11:50:01 -- nvmf/common.sh@157 -- # true 00:22:28.401 11:50:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:28.401 Cannot find device "nvmf_tgt_br2" 00:22:28.401 11:50:01 -- nvmf/common.sh@158 -- # true 00:22:28.401 11:50:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:28.401 11:50:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:28.401 11:50:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:28.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.401 11:50:01 -- nvmf/common.sh@161 -- # true 00:22:28.401 11:50:01 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:28.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.401 11:50:01 -- nvmf/common.sh@162 -- # true 00:22:28.401 11:50:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:28.401 11:50:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:28.401 11:50:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:28.401 11:50:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:28.401 11:50:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:28.401 11:50:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:28.401 11:50:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:28.401 11:50:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:28.401 11:50:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:28.401 11:50:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:28.401 11:50:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:28.401 11:50:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:28.401 11:50:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:28.401 11:50:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:28.401 11:50:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:28.401 11:50:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:28.660 11:50:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:28.660 11:50:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:28.660 11:50:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:28.660 11:50:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:28.660 11:50:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:28.660 11:50:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:28.660 11:50:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:28.660 11:50:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:28.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:22:28.660 00:22:28.660 --- 10.0.0.2 ping statistics --- 00:22:28.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.660 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:28.660 11:50:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:28.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:28.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:22:28.660 00:22:28.660 --- 10.0.0.3 ping statistics --- 00:22:28.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.661 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:28.661 11:50:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:28.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:22:28.661 00:22:28.661 --- 10.0.0.1 ping statistics --- 00:22:28.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.661 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:28.661 11:50:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.661 11:50:01 -- nvmf/common.sh@421 -- # return 0 00:22:28.661 11:50:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:28.661 11:50:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.661 11:50:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:28.661 11:50:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:28.661 11:50:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.661 11:50:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:28.661 11:50:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:28.661 11:50:01 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=79915 00:22:28.661 11:50:01 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:28.661 11:50:01 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 79915 00:22:28.661 11:50:01 -- common/autotest_common.sh@829 -- # '[' -z 79915 ']' 00:22:28.661 11:50:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.661 11:50:01 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:28.661 11:50:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.661 11:50:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:28.661 11:50:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.661 11:50:01 -- common/autotest_common.sh@10 -- # set +x 00:22:29.601 11:50:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.601 11:50:02 -- common/autotest_common.sh@862 -- # return 0 00:22:29.601 11:50:02 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.601 11:50:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.601 11:50:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.601 11:50:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.601 11:50:02 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:29.601 11:50:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.602 11:50:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.602 Malloc0 00:22:29.602 11:50:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.602 11:50:02 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.602 11:50:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.602 11:50:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.602 11:50:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.602 11:50:02 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:29.602 11:50:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.602 11:50:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.602 11:50:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.602 11:50:02 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.602 11:50:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.602 11:50:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.602 11:50:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.602 11:50:02 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:29.602 11:50:02 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:29.861 Shutting down the fuzz application 00:22:29.861 11:50:02 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:30.431 Shutting down the fuzz application 00:22:30.431 11:50:03 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:30.431 11:50:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.431 11:50:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.431 11:50:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.431 11:50:03 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:30.431 11:50:03 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:30.431 11:50:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:30.431 11:50:03 -- nvmf/common.sh@116 -- # sync 00:22:30.431 11:50:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:30.431 11:50:03 -- nvmf/common.sh@119 -- # set +e 00:22:30.431 11:50:03 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:22:30.431 11:50:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:30.431 rmmod nvme_tcp 00:22:30.431 rmmod nvme_fabrics 00:22:30.431 rmmod nvme_keyring 00:22:30.431 11:50:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:30.431 11:50:03 -- nvmf/common.sh@123 -- # set -e 00:22:30.431 11:50:03 -- nvmf/common.sh@124 -- # return 0 00:22:30.431 11:50:03 -- nvmf/common.sh@477 -- # '[' -n 79915 ']' 00:22:30.431 11:50:03 -- nvmf/common.sh@478 -- # killprocess 79915 00:22:30.431 11:50:03 -- common/autotest_common.sh@936 -- # '[' -z 79915 ']' 00:22:30.431 11:50:03 -- common/autotest_common.sh@940 -- # kill -0 79915 00:22:30.431 11:50:03 -- common/autotest_common.sh@941 -- # uname 00:22:30.431 11:50:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.431 11:50:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79915 00:22:30.431 11:50:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:30.431 11:50:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:30.431 killing process with pid 79915 00:22:30.431 11:50:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79915' 00:22:30.431 11:50:03 -- common/autotest_common.sh@955 -- # kill 79915 00:22:30.431 11:50:03 -- common/autotest_common.sh@960 -- # wait 79915 00:22:30.690 11:50:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:30.690 11:50:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:30.690 11:50:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:30.691 11:50:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.691 11:50:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:30.691 11:50:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.691 11:50:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.691 11:50:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.691 11:50:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:30.691 11:50:03 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:22:30.691 00:22:30.691 real 0m2.775s 00:22:30.691 user 0m2.777s 00:22:30.691 sys 0m0.697s 00:22:30.691 11:50:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:30.691 11:50:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.691 ************************************ 00:22:30.691 END TEST nvmf_fuzz 00:22:30.691 ************************************ 00:22:30.950 11:50:03 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:30.950 11:50:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:30.950 11:50:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:30.950 11:50:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.950 ************************************ 00:22:30.950 START TEST nvmf_multiconnection 00:22:30.950 ************************************ 00:22:30.950 11:50:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:30.950 * Looking for test storage... 
00:22:30.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:30.950 11:50:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:30.950 11:50:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:30.950 11:50:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:30.950 11:50:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:30.950 11:50:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:30.951 11:50:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:30.951 11:50:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:30.951 11:50:03 -- scripts/common.sh@335 -- # IFS=.-: 00:22:30.951 11:50:03 -- scripts/common.sh@335 -- # read -ra ver1 00:22:30.951 11:50:03 -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.951 11:50:03 -- scripts/common.sh@336 -- # read -ra ver2 00:22:30.951 11:50:03 -- scripts/common.sh@337 -- # local 'op=<' 00:22:30.951 11:50:03 -- scripts/common.sh@339 -- # ver1_l=2 00:22:30.951 11:50:03 -- scripts/common.sh@340 -- # ver2_l=1 00:22:30.951 11:50:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:30.951 11:50:03 -- scripts/common.sh@343 -- # case "$op" in 00:22:30.951 11:50:03 -- scripts/common.sh@344 -- # : 1 00:22:30.951 11:50:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:30.951 11:50:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.951 11:50:03 -- scripts/common.sh@364 -- # decimal 1 00:22:30.951 11:50:03 -- scripts/common.sh@352 -- # local d=1 00:22:30.951 11:50:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.951 11:50:03 -- scripts/common.sh@354 -- # echo 1 00:22:30.951 11:50:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:30.951 11:50:03 -- scripts/common.sh@365 -- # decimal 2 00:22:30.951 11:50:03 -- scripts/common.sh@352 -- # local d=2 00:22:30.951 11:50:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.951 11:50:03 -- scripts/common.sh@354 -- # echo 2 00:22:30.951 11:50:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:30.951 11:50:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:30.951 11:50:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:30.951 11:50:03 -- scripts/common.sh@367 -- # return 0 00:22:30.951 11:50:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.951 11:50:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:30.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.951 --rc genhtml_branch_coverage=1 00:22:30.951 --rc genhtml_function_coverage=1 00:22:30.951 --rc genhtml_legend=1 00:22:30.951 --rc geninfo_all_blocks=1 00:22:30.951 --rc geninfo_unexecuted_blocks=1 00:22:30.951 00:22:30.951 ' 00:22:30.951 11:50:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:30.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.951 --rc genhtml_branch_coverage=1 00:22:30.951 --rc genhtml_function_coverage=1 00:22:30.951 --rc genhtml_legend=1 00:22:30.951 --rc geninfo_all_blocks=1 00:22:30.951 --rc geninfo_unexecuted_blocks=1 00:22:30.951 00:22:30.951 ' 00:22:30.951 11:50:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:30.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.951 --rc genhtml_branch_coverage=1 00:22:30.951 --rc genhtml_function_coverage=1 00:22:30.951 --rc genhtml_legend=1 00:22:30.951 --rc geninfo_all_blocks=1 00:22:30.951 --rc geninfo_unexecuted_blocks=1 00:22:30.951 00:22:30.951 ' 00:22:30.951 
11:50:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:30.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.951 --rc genhtml_branch_coverage=1 00:22:30.951 --rc genhtml_function_coverage=1 00:22:30.951 --rc genhtml_legend=1 00:22:30.951 --rc geninfo_all_blocks=1 00:22:30.951 --rc geninfo_unexecuted_blocks=1 00:22:30.951 00:22:30.951 ' 00:22:30.951 11:50:03 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:30.951 11:50:03 -- nvmf/common.sh@7 -- # uname -s 00:22:30.951 11:50:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.951 11:50:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.951 11:50:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.951 11:50:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.951 11:50:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.951 11:50:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.951 11:50:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.951 11:50:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.951 11:50:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.951 11:50:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.212 11:50:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:22:31.212 11:50:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:22:31.212 11:50:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.212 11:50:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.212 11:50:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.212 11:50:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.212 11:50:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.212 11:50:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.212 11:50:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.212 11:50:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.212 11:50:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.212 11:50:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.212 11:50:04 -- paths/export.sh@5 -- # export PATH 00:22:31.212 11:50:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.212 11:50:04 -- nvmf/common.sh@46 -- # : 0 00:22:31.212 11:50:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:31.212 11:50:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:31.212 11:50:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:31.212 11:50:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.212 11:50:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.212 11:50:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:31.212 11:50:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:31.212 11:50:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:31.212 11:50:04 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.212 11:50:04 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.212 11:50:04 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:31.212 11:50:04 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:31.212 11:50:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:31.212 11:50:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.212 11:50:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:31.213 11:50:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:31.213 11:50:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:31.213 11:50:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.213 11:50:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.213 11:50:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.213 11:50:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:31.213 11:50:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:31.213 11:50:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:31.213 11:50:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:31.213 11:50:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:31.213 11:50:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:31.213 11:50:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.213 11:50:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.213 11:50:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:31.213 11:50:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:31.213 11:50:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.213 11:50:04 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.213 11:50:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.213 11:50:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.213 11:50:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.213 11:50:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.213 11:50:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.213 11:50:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.213 11:50:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:31.213 11:50:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:31.213 Cannot find device "nvmf_tgt_br" 00:22:31.213 11:50:04 -- nvmf/common.sh@154 -- # true 00:22:31.213 11:50:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.213 Cannot find device "nvmf_tgt_br2" 00:22:31.213 11:50:04 -- nvmf/common.sh@155 -- # true 00:22:31.213 11:50:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:31.213 11:50:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:31.213 Cannot find device "nvmf_tgt_br" 00:22:31.213 11:50:04 -- nvmf/common.sh@157 -- # true 00:22:31.213 11:50:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:31.213 Cannot find device "nvmf_tgt_br2" 00:22:31.213 11:50:04 -- nvmf/common.sh@158 -- # true 00:22:31.213 11:50:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:31.213 11:50:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:31.213 11:50:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.213 11:50:04 -- nvmf/common.sh@161 -- # true 00:22:31.213 11:50:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:31.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.213 11:50:04 -- nvmf/common.sh@162 -- # true 00:22:31.213 11:50:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:31.213 11:50:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:31.213 11:50:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:31.213 11:50:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:31.213 11:50:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:31.213 11:50:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:31.473 11:50:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:31.473 11:50:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:31.473 11:50:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:31.473 11:50:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:31.473 11:50:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:31.473 11:50:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:31.473 11:50:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:31.473 11:50:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:31.473 11:50:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:22:31.473 11:50:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:31.473 11:50:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:31.473 11:50:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:31.473 11:50:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:31.473 11:50:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:31.473 11:50:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:31.473 11:50:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:31.473 11:50:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:31.473 11:50:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:31.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:22:31.473 00:22:31.473 --- 10.0.0.2 ping statistics --- 00:22:31.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.473 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:22:31.473 11:50:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:31.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:31.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:22:31.473 00:22:31.473 --- 10.0.0.3 ping statistics --- 00:22:31.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.473 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:31.473 11:50:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:31.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:22:31.473 00:22:31.473 --- 10.0.0.1 ping statistics --- 00:22:31.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.473 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:31.473 11:50:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.473 11:50:04 -- nvmf/common.sh@421 -- # return 0 00:22:31.473 11:50:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:31.473 11:50:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.473 11:50:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:31.473 11:50:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:31.473 11:50:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.473 11:50:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:31.473 11:50:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:31.473 11:50:04 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:31.473 11:50:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:31.473 11:50:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.473 11:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:31.473 11:50:04 -- nvmf/common.sh@469 -- # nvmfpid=80139 00:22:31.473 11:50:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.473 11:50:04 -- nvmf/common.sh@470 -- # waitforlisten 80139 00:22:31.473 11:50:04 -- common/autotest_common.sh@829 -- # '[' -z 80139 ']' 00:22:31.474 11:50:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.474 11:50:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.474 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:22:31.474 11:50:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.474 11:50:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.474 11:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:31.474 [2024-11-20 11:50:04.462801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:31.474 [2024-11-20 11:50:04.462861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.733 [2024-11-20 11:50:04.599931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.733 [2024-11-20 11:50:04.682729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:31.733 [2024-11-20 11:50:04.682867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.733 [2024-11-20 11:50:04.682874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.733 [2024-11-20 11:50:04.682879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.733 [2024-11-20 11:50:04.683092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.733 [2024-11-20 11:50:04.683300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.733 [2024-11-20 11:50:04.683397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.733 [2024-11-20 11:50:04.683404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.303 11:50:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.303 11:50:05 -- common/autotest_common.sh@862 -- # return 0 00:22:32.303 11:50:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:32.303 11:50:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:32.303 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.563 11:50:05 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 [2024-11-20 11:50:05.363780] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:32.563 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.563 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 Malloc1 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 
11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 [2024-11-20 11:50:05.431083] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.563 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 Malloc2 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.563 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 Malloc3 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
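(Aside: the subsystem setup traced here is one pass of a loop over NVMF_SUBSYS=11. Condensed into one place, assembled from the rpc_cmd calls visible in this log, with rpc.py standing in for the rpc_cmd helper that routes through the target namespace socket:)

    # One malloc bdev, one subsystem, one namespace and one TCP listener per index.
    for i in $(seq 1 11); do
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done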
00:22:32.563 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.564 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:32.564 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.564 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.564 Malloc4 00:22:32.564 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.564 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:32.564 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.564 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.564 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.564 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:32.564 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.564 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.564 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.564 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:32.564 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.564 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.827 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.827 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.827 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:32.827 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.827 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.827 Malloc5 00:22:32.827 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.827 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:32.827 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.827 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.827 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.827 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:32.827 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.827 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.827 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.827 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:32.827 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.827 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.828 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 Malloc6 00:22:32.828 11:50:05 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.828 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 Malloc7 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.828 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 Malloc8 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 
-- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.828 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.828 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.828 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:32.828 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.828 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 Malloc9 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.103 11:50:05 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 Malloc10 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.103 11:50:05 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 Malloc11 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:05 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:33.103 11:50:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:33.103 11:50:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.103 11:50:06 -- common/autotest_common.sh@10 -- # set +x 00:22:33.103 11:50:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.103 11:50:06 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:33.103 11:50:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.103 11:50:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:33.377 11:50:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:33.377 11:50:06 -- common/autotest_common.sh@1187 -- # local i=0 00:22:33.377 11:50:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:33.377 11:50:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:33.377 11:50:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:35.287 11:50:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:35.287 11:50:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:35.287 11:50:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:22:35.287 11:50:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:35.287 11:50:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:35.287 11:50:08 -- common/autotest_common.sh@1197 -- # return 0 00:22:35.287 11:50:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.287 11:50:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:35.547 11:50:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:35.547 11:50:08 -- common/autotest_common.sh@1187 -- # local i=0 00:22:35.547 11:50:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:35.547 11:50:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:35.547 11:50:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:37.458 11:50:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:37.458 11:50:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:22:37.458 11:50:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:22:37.458 11:50:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:37.458 11:50:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:37.458 11:50:10 -- common/autotest_common.sh@1197 -- # return 0 00:22:37.458 11:50:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.458 11:50:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:37.718 11:50:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:37.718 11:50:10 -- common/autotest_common.sh@1187 -- # local i=0 00:22:37.718 11:50:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:37.718 11:50:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:37.718 11:50:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:39.628 11:50:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:39.628 11:50:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:39.628 11:50:12 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:22:39.888 11:50:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:39.888 11:50:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:39.888 11:50:12 -- common/autotest_common.sh@1197 -- # return 0 00:22:39.888 11:50:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.888 11:50:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:39.888 11:50:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:39.888 11:50:12 -- common/autotest_common.sh@1187 -- # local i=0 00:22:39.888 11:50:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:39.888 11:50:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:39.888 11:50:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:42.430 11:50:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:42.430 11:50:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:42.430 11:50:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:22:42.430 11:50:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:42.430 11:50:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:42.430 11:50:14 -- common/autotest_common.sh@1197 -- # return 0 00:22:42.430 11:50:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.430 11:50:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:42.430 11:50:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:42.430 11:50:15 -- common/autotest_common.sh@1187 -- # local i=0 00:22:42.430 11:50:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:42.430 11:50:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:42.430 11:50:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:44.369 11:50:17 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:44.369 11:50:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:44.369 11:50:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:22:44.369 11:50:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:44.369 11:50:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:44.369 11:50:17 -- common/autotest_common.sh@1197 -- # return 0 00:22:44.369 11:50:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.369 11:50:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:44.369 11:50:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:44.369 11:50:17 -- common/autotest_common.sh@1187 -- # local i=0 00:22:44.369 11:50:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:44.369 11:50:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:44.369 11:50:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:46.276 11:50:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:46.276 11:50:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:46.276 11:50:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:22:46.276 11:50:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:46.276 11:50:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:46.276 11:50:19 -- common/autotest_common.sh@1197 -- # return 0 00:22:46.276 11:50:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:46.276 11:50:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:46.536 11:50:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:46.536 11:50:19 -- common/autotest_common.sh@1187 -- # local i=0 00:22:46.536 11:50:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:46.536 11:50:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:46.536 11:50:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:48.446 11:50:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:48.446 11:50:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:48.446 11:50:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:22:48.706 11:50:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:48.706 11:50:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:48.706 11:50:21 -- common/autotest_common.sh@1197 -- # return 0 00:22:48.706 11:50:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.706 11:50:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:48.706 11:50:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:48.706 11:50:21 -- common/autotest_common.sh@1187 -- # local i=0 00:22:48.706 11:50:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:48.706 11:50:21 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:48.706 11:50:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:51.245 11:50:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:51.245 11:50:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:51.245 11:50:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:22:51.245 11:50:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:51.245 11:50:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:51.245 11:50:23 -- common/autotest_common.sh@1197 -- # return 0 00:22:51.245 11:50:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.245 11:50:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:51.245 11:50:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:51.245 11:50:23 -- common/autotest_common.sh@1187 -- # local i=0 00:22:51.245 11:50:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:51.245 11:50:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:51.245 11:50:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:53.155 11:50:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:53.155 11:50:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:53.155 11:50:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:22:53.155 11:50:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:53.155 11:50:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:53.155 11:50:25 -- common/autotest_common.sh@1197 -- # return 0 00:22:53.155 11:50:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.155 11:50:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:53.155 11:50:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:53.155 11:50:26 -- common/autotest_common.sh@1187 -- # local i=0 00:22:53.155 11:50:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:53.155 11:50:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:53.155 11:50:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:55.698 11:50:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:55.698 11:50:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:55.698 11:50:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:22:55.698 11:50:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:55.698 11:50:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:55.698 11:50:28 -- common/autotest_common.sh@1197 -- # return 0 00:22:55.698 11:50:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.698 11:50:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:55.698 11:50:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:55.698 11:50:28 -- common/autotest_common.sh@1187 -- # local i=0 
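(Aside: each of the initiator-side connects above follows the same pattern: nvme connect to the next subsystem, then poll lsblk until a block device with the expected serial appears. A condensed sketch using the commands visible in this log -- the retry/grep details are illustrative, the real logic is waitforserial in autotest_common.sh:)

    for i in $(seq 1 11); do
        nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
            -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        # Wait until the namespace shows up locally with serial SPDK$i.
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
            sleep 2
        done
    done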
00:22:55.698 11:50:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:55.698 11:50:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:55.698 11:50:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:57.609 11:50:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:57.609 11:50:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:57.609 11:50:30 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:22:57.609 11:50:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:57.609 11:50:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:57.609 11:50:30 -- common/autotest_common.sh@1197 -- # return 0 00:22:57.609 11:50:30 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:57.609 [global] 00:22:57.609 thread=1 00:22:57.609 invalidate=1 00:22:57.609 rw=read 00:22:57.609 time_based=1 00:22:57.609 runtime=10 00:22:57.609 ioengine=libaio 00:22:57.609 direct=1 00:22:57.609 bs=262144 00:22:57.609 iodepth=64 00:22:57.609 norandommap=1 00:22:57.609 numjobs=1 00:22:57.609 00:22:57.609 [job0] 00:22:57.609 filename=/dev/nvme0n1 00:22:57.609 [job1] 00:22:57.609 filename=/dev/nvme10n1 00:22:57.609 [job2] 00:22:57.609 filename=/dev/nvme1n1 00:22:57.609 [job3] 00:22:57.609 filename=/dev/nvme2n1 00:22:57.609 [job4] 00:22:57.609 filename=/dev/nvme3n1 00:22:57.609 [job5] 00:22:57.609 filename=/dev/nvme4n1 00:22:57.609 [job6] 00:22:57.609 filename=/dev/nvme5n1 00:22:57.609 [job7] 00:22:57.609 filename=/dev/nvme6n1 00:22:57.609 [job8] 00:22:57.609 filename=/dev/nvme7n1 00:22:57.609 [job9] 00:22:57.609 filename=/dev/nvme8n1 00:22:57.609 [job10] 00:22:57.609 filename=/dev/nvme9n1 00:22:57.869 Could not set queue depth (nvme0n1) 00:22:57.869 Could not set queue depth (nvme10n1) 00:22:57.869 Could not set queue depth (nvme1n1) 00:22:57.869 Could not set queue depth (nvme2n1) 00:22:57.869 Could not set queue depth (nvme3n1) 00:22:57.869 Could not set queue depth (nvme4n1) 00:22:57.869 Could not set queue depth (nvme5n1) 00:22:57.869 Could not set queue depth (nvme6n1) 00:22:57.869 Could not set queue depth (nvme7n1) 00:22:57.869 Could not set queue depth (nvme8n1) 00:22:57.869 Could not set queue depth (nvme9n1) 00:22:57.869 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:22:57.869 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:57.869 fio-3.35 00:22:57.869 Starting 11 threads 00:23:10.088 00:23:10.088 job0: (groupid=0, jobs=1): err= 0: pid=80617: Wed Nov 20 11:50:41 2024 00:23:10.088 read: IOPS=540, BW=135MiB/s (142MB/s)(1366MiB/10108msec) 00:23:10.088 slat (usec): min=11, max=133290, avg=1730.82, stdev=7029.29 00:23:10.088 clat (usec): min=1323, max=325953, avg=116433.25, stdev=58222.81 00:23:10.088 lat (usec): min=1344, max=365352, avg=118164.08, stdev=59461.97 00:23:10.088 clat percentiles (msec): 00:23:10.088 | 1.00th=[ 12], 5.00th=[ 21], 10.00th=[ 28], 20.00th=[ 59], 00:23:10.088 | 30.00th=[ 78], 40.00th=[ 96], 50.00th=[ 138], 60.00th=[ 150], 00:23:10.088 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 192], 00:23:10.088 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 255], 99.95th=[ 305], 00:23:10.088 | 99.99th=[ 326] 00:23:10.088 bw ( KiB/s): min=78336, max=437908, per=7.41%, avg=138159.75, stdev=84387.09, samples=20 00:23:10.088 iops : min= 306, max= 1710, avg=539.55, stdev=329.46, samples=20 00:23:10.088 lat (msec) : 2=0.20%, 4=0.05%, 10=0.37%, 20=4.26%, 50=13.69% 00:23:10.088 lat (msec) : 100=22.51%, 250=58.80%, 500=0.11% 00:23:10.088 cpu : usr=0.23%, sys=2.67%, ctx=1298, majf=0, minf=4097 00:23:10.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:10.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.088 issued rwts: total=5464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.088 job1: (groupid=0, jobs=1): err= 0: pid=80618: Wed Nov 20 11:50:41 2024 00:23:10.088 read: IOPS=635, BW=159MiB/s (167MB/s)(1607MiB/10115msec) 00:23:10.088 slat (usec): min=14, max=89375, avg=1459.51, stdev=6142.74 00:23:10.088 clat (usec): min=1007, max=263238, avg=99052.05, stdev=60008.90 00:23:10.088 lat (usec): min=1091, max=263269, avg=100511.56, stdev=61146.30 00:23:10.088 clat percentiles (msec): 00:23:10.088 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 40], 00:23:10.088 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 77], 60.00th=[ 134], 00:23:10.088 | 70.00th=[ 150], 80.00th=[ 163], 90.00th=[ 180], 95.00th=[ 188], 00:23:10.088 | 99.00th=[ 228], 99.50th=[ 239], 99.90th=[ 257], 99.95th=[ 264], 00:23:10.088 | 99.99th=[ 264] 00:23:10.088 bw ( KiB/s): min=79360, max=466432, per=8.74%, avg=162931.30, stdev=109375.23, samples=20 00:23:10.088 iops : min= 310, max= 1822, avg=636.35, stdev=427.23, samples=20 00:23:10.088 lat (msec) : 2=0.02%, 10=0.09%, 20=0.79%, 50=30.76%, 100=23.90% 00:23:10.088 lat (msec) : 250=44.20%, 500=0.25% 00:23:10.088 cpu : usr=0.22%, sys=3.00%, ctx=1550, majf=0, minf=4097 00:23:10.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:10.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.088 issued rwts: total=6428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.088 job2: (groupid=0, jobs=1): err= 0: pid=80619: Wed Nov 20 11:50:41 2024 00:23:10.088 read: IOPS=441, BW=110MiB/s (116MB/s)(1116MiB/10107msec) 00:23:10.088 slat (usec): min=14, max=90923, avg=2157.92, stdev=7651.70 00:23:10.088 clat (msec): min=23, max=267, 
avg=142.54, stdev=40.24 00:23:10.088 lat (msec): min=25, max=285, avg=144.70, stdev=41.35 00:23:10.088 clat percentiles (msec): 00:23:10.088 | 1.00th=[ 56], 5.00th=[ 75], 10.00th=[ 84], 20.00th=[ 97], 00:23:10.088 | 30.00th=[ 129], 40.00th=[ 144], 50.00th=[ 150], 60.00th=[ 159], 00:23:10.088 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 201], 00:23:10.088 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 245], 99.95th=[ 245], 00:23:10.088 | 99.99th=[ 268] 00:23:10.088 bw ( KiB/s): min=73728, max=194048, per=6.04%, avg=112542.45, stdev=31267.45, samples=20 00:23:10.088 iops : min= 288, max= 758, avg=439.50, stdev=122.09, samples=20 00:23:10.088 lat (msec) : 50=0.94%, 100=20.87%, 250=78.17%, 500=0.02% 00:23:10.088 cpu : usr=0.15%, sys=2.38%, ctx=987, majf=0, minf=4097 00:23:10.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:10.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.088 issued rwts: total=4462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.088 job3: (groupid=0, jobs=1): err= 0: pid=80620: Wed Nov 20 11:50:41 2024 00:23:10.088 read: IOPS=947, BW=237MiB/s (248MB/s)(2398MiB/10119msec) 00:23:10.089 slat (usec): min=14, max=161805, avg=948.70, stdev=5889.50 00:23:10.089 clat (usec): min=1425, max=280565, avg=66417.56, stdev=57617.28 00:23:10.089 lat (usec): min=1506, max=341126, avg=67366.27, stdev=58682.67 00:23:10.089 clat percentiles (msec): 00:23:10.089 | 1.00th=[ 12], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 21], 00:23:10.089 | 30.00th=[ 23], 40.00th=[ 27], 50.00th=[ 31], 60.00th=[ 56], 00:23:10.089 | 70.00th=[ 93], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 176], 00:23:10.089 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 264], 99.95th=[ 264], 00:23:10.089 | 99.99th=[ 279] 00:23:10.089 bw ( KiB/s): min=92672, max=713728, per=13.08%, avg=243666.15, stdev=216855.36, samples=20 00:23:10.089 iops : min= 362, max= 2788, avg=951.75, stdev=847.00, samples=20 00:23:10.089 lat (msec) : 2=0.14%, 4=0.25%, 10=0.49%, 20=16.24%, 50=41.33% 00:23:10.089 lat (msec) : 100=12.81%, 250=28.58%, 500=0.17% 00:23:10.089 cpu : usr=0.23%, sys=3.64%, ctx=2259, majf=0, minf=4097 00:23:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.089 issued rwts: total=9590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.089 job4: (groupid=0, jobs=1): err= 0: pid=80621: Wed Nov 20 11:50:41 2024 00:23:10.089 read: IOPS=718, BW=180MiB/s (188MB/s)(1806MiB/10045msec) 00:23:10.089 slat (usec): min=20, max=85877, avg=1364.63, stdev=5119.96 00:23:10.089 clat (msec): min=37, max=259, avg=87.49, stdev=35.90 00:23:10.089 lat (msec): min=37, max=308, avg=88.86, stdev=36.65 00:23:10.089 clat percentiles (msec): 00:23:10.089 | 1.00th=[ 51], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 67], 00:23:10.089 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 79], 00:23:10.089 | 70.00th=[ 84], 80.00th=[ 97], 90.00th=[ 144], 95.00th=[ 169], 00:23:10.089 | 99.00th=[ 232], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 259], 00:23:10.089 | 99.99th=[ 259] 00:23:10.089 bw ( KiB/s): min=67584, max=237568, per=9.83%, avg=183135.80, stdev=58312.01, 
samples=20 00:23:10.089 iops : min= 264, max= 928, avg=715.15, stdev=227.77, samples=20 00:23:10.089 lat (msec) : 50=0.98%, 100=79.87%, 250=19.04%, 500=0.11% 00:23:10.089 cpu : usr=0.32%, sys=3.86%, ctx=1700, majf=0, minf=4097 00:23:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.089 issued rwts: total=7222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.089 job5: (groupid=0, jobs=1): err= 0: pid=80622: Wed Nov 20 11:50:41 2024 00:23:10.089 read: IOPS=437, BW=109MiB/s (115MB/s)(1107MiB/10119msec) 00:23:10.089 slat (usec): min=17, max=116034, avg=2131.12, stdev=7558.91 00:23:10.089 clat (msec): min=4, max=302, avg=143.90, stdev=50.80 00:23:10.089 lat (msec): min=4, max=342, avg=146.03, stdev=51.83 00:23:10.089 clat percentiles (msec): 00:23:10.089 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 56], 20.00th=[ 112], 00:23:10.089 | 30.00th=[ 132], 40.00th=[ 148], 50.00th=[ 157], 60.00th=[ 165], 00:23:10.089 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 207], 00:23:10.089 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 268], 99.95th=[ 268], 00:23:10.089 | 99.99th=[ 305] 00:23:10.089 bw ( KiB/s): min=64512, max=305152, per=5.99%, avg=111644.60, stdev=49191.26, samples=20 00:23:10.089 iops : min= 252, max= 1192, avg=436.05, stdev=192.17, samples=20 00:23:10.089 lat (msec) : 10=0.43%, 20=1.83%, 50=6.51%, 100=9.29%, 250=81.52% 00:23:10.089 lat (msec) : 500=0.43% 00:23:10.089 cpu : usr=0.11%, sys=2.50%, ctx=1011, majf=0, minf=4097 00:23:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.089 issued rwts: total=4426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.089 job6: (groupid=0, jobs=1): err= 0: pid=80623: Wed Nov 20 11:50:41 2024 00:23:10.089 read: IOPS=705, BW=176MiB/s (185MB/s)(1773MiB/10060msec) 00:23:10.089 slat (usec): min=14, max=145924, avg=1405.72, stdev=6116.92 00:23:10.089 clat (msec): min=22, max=263, avg=89.20, stdev=35.86 00:23:10.089 lat (msec): min=22, max=369, avg=90.60, stdev=36.78 00:23:10.089 clat percentiles (msec): 00:23:10.089 | 1.00th=[ 51], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 66], 00:23:10.089 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 82], 00:23:10.089 | 70.00th=[ 89], 80.00th=[ 106], 90.00th=[ 142], 95.00th=[ 171], 00:23:10.089 | 99.00th=[ 222], 99.50th=[ 226], 99.90th=[ 255], 99.95th=[ 259], 00:23:10.089 | 99.99th=[ 264] 00:23:10.089 bw ( KiB/s): min=79360, max=247296, per=9.65%, avg=179871.50, stdev=55596.75, samples=20 00:23:10.089 iops : min= 310, max= 966, avg=702.55, stdev=217.14, samples=20 00:23:10.089 lat (msec) : 50=0.79%, 100=76.60%, 250=22.49%, 500=0.13% 00:23:10.089 cpu : usr=0.16%, sys=2.82%, ctx=1491, majf=0, minf=4097 00:23:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.089 issued rwts: total=7093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.089 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:23:10.089 job7: (groupid=0, jobs=1): err= 0: pid=80624: Wed Nov 20 11:50:41 2024 00:23:10.089 read: IOPS=609, BW=152MiB/s (160MB/s)(1530MiB/10037msec) 00:23:10.089 slat (usec): min=14, max=91507, avg=1553.37, stdev=6290.88 00:23:10.089 clat (usec): min=1292, max=285418, avg=103145.59, stdev=60274.41 00:23:10.089 lat (usec): min=1361, max=300620, avg=104698.96, stdev=61392.15 00:23:10.089 clat percentiles (msec): 00:23:10.089 | 1.00th=[ 3], 5.00th=[ 28], 10.00th=[ 41], 20.00th=[ 50], 00:23:10.089 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 77], 60.00th=[ 128], 00:23:10.089 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 197], 00:23:10.089 | 99.00th=[ 226], 99.50th=[ 239], 99.90th=[ 271], 99.95th=[ 271], 00:23:10.089 | 99.99th=[ 288] 00:23:10.089 bw ( KiB/s): min=77824, max=308119, per=8.32%, avg=155002.40, stdev=79259.68, samples=20 00:23:10.089 iops : min= 304, max= 1203, avg=605.40, stdev=309.57, samples=20 00:23:10.089 lat (msec) : 2=0.33%, 4=1.16%, 10=0.54%, 20=1.29%, 50=17.09% 00:23:10.089 lat (msec) : 100=36.17%, 250=43.08%, 500=0.34% 00:23:10.089 cpu : usr=0.25%, sys=2.98%, ctx=1252, majf=0, minf=4097 00:23:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.089 issued rwts: total=6121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.089 job8: (groupid=0, jobs=1): err= 0: pid=80625: Wed Nov 20 11:50:41 2024 00:23:10.089 read: IOPS=704, BW=176MiB/s (185MB/s)(1770MiB/10057msec) 00:23:10.089 slat (usec): min=12, max=102074, avg=1380.41, stdev=5003.59 00:23:10.089 clat (msec): min=23, max=275, avg=89.32, stdev=36.07 00:23:10.089 lat (msec): min=23, max=322, avg=90.70, stdev=36.83 00:23:10.089 clat percentiles (msec): 00:23:10.089 | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 67], 00:23:10.089 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 82], 00:23:10.089 | 70.00th=[ 89], 80.00th=[ 104], 90.00th=[ 142], 95.00th=[ 169], 00:23:10.089 | 99.00th=[ 232], 99.50th=[ 236], 99.90th=[ 253], 99.95th=[ 271], 00:23:10.089 | 99.99th=[ 275] 00:23:10.089 bw ( KiB/s): min=66560, max=250880, per=9.64%, avg=179630.25, stdev=57408.92, samples=20 00:23:10.089 iops : min= 260, max= 980, avg=701.55, stdev=224.32, samples=20 00:23:10.089 lat (msec) : 50=1.57%, 100=77.09%, 250=21.04%, 500=0.30% 00:23:10.089 cpu : usr=0.28%, sys=3.76%, ctx=1938, majf=0, minf=4097 00:23:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.089 issued rwts: total=7081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.089 job9: (groupid=0, jobs=1): err= 0: pid=80626: Wed Nov 20 11:50:41 2024 00:23:10.089 read: IOPS=1024, BW=256MiB/s (269MB/s)(2571MiB/10036msec) 00:23:10.089 slat (usec): min=11, max=115082, avg=945.68, stdev=4388.44 00:23:10.089 clat (msec): min=3, max=276, avg=61.36, stdev=52.68 00:23:10.089 lat (msec): min=3, max=281, avg=62.30, stdev=53.55 00:23:10.089 clat percentiles (msec): 00:23:10.089 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 18], 20.00th=[ 21], 00:23:10.089 | 30.00th=[ 25], 40.00th=[ 29], 50.00th=[ 
43], 60.00th=[ 53], 00:23:10.089 | 70.00th=[ 69], 80.00th=[ 97], 90.00th=[ 155], 95.00th=[ 176], 00:23:10.090 | 99.00th=[ 211], 99.50th=[ 230], 99.90th=[ 253], 99.95th=[ 253], 00:23:10.090 | 99.99th=[ 271] 00:23:10.090 bw ( KiB/s): min=80896, max=729600, per=14.04%, avg=261599.70, stdev=213015.88, samples=20 00:23:10.090 iops : min= 316, max= 2850, avg=1021.80, stdev=832.12, samples=20 00:23:10.090 lat (msec) : 4=0.03%, 10=1.49%, 20=16.67%, 50=39.25%, 100=23.15% 00:23:10.090 lat (msec) : 250=19.23%, 500=0.18% 00:23:10.090 cpu : usr=0.25%, sys=3.57%, ctx=1992, majf=0, minf=4097 00:23:10.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:10.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.090 issued rwts: total=10285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.090 job10: (groupid=0, jobs=1): err= 0: pid=80627: Wed Nov 20 11:50:41 2024 00:23:10.090 read: IOPS=542, BW=136MiB/s (142MB/s)(1372MiB/10116msec) 00:23:10.090 slat (usec): min=15, max=72877, avg=1764.11, stdev=6180.14 00:23:10.090 clat (msec): min=10, max=241, avg=115.91, stdev=51.39 00:23:10.090 lat (msec): min=10, max=255, avg=117.68, stdev=52.44 00:23:10.090 clat percentiles (msec): 00:23:10.090 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 59], 00:23:10.090 | 30.00th=[ 71], 40.00th=[ 89], 50.00th=[ 124], 60.00th=[ 148], 00:23:10.090 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 188], 00:23:10.090 | 99.00th=[ 203], 99.50th=[ 224], 99.90th=[ 243], 99.95th=[ 243], 00:23:10.090 | 99.99th=[ 243] 00:23:10.090 bw ( KiB/s): min=86528, max=295776, per=7.44%, avg=138722.90, stdev=62794.67, samples=20 00:23:10.090 iops : min= 338, max= 1155, avg=541.80, stdev=245.25, samples=20 00:23:10.090 lat (msec) : 20=0.35%, 50=8.80%, 100=34.71%, 250=56.14% 00:23:10.090 cpu : usr=0.17%, sys=2.90%, ctx=1202, majf=0, minf=4097 00:23:10.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:10.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:10.090 issued rwts: total=5486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.090 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:10.090 00:23:10.090 Run status group 0 (all jobs): 00:23:10.090 READ: bw=1820MiB/s (1908MB/s), 109MiB/s-256MiB/s (115MB/s-269MB/s), io=18.0GiB (19.3GB), run=10036-10119msec 00:23:10.090 00:23:10.090 Disk stats (read/write): 00:23:10.090 nvme0n1: ios=10867/0, merge=0/0, ticks=1238429/0, in_queue=1238429, util=97.91% 00:23:10.090 nvme10n1: ios=12795/0, merge=0/0, ticks=1236658/0, in_queue=1236658, util=97.91% 00:23:10.090 nvme1n1: ios=8819/0, merge=0/0, ticks=1241989/0, in_queue=1241989, util=98.22% 00:23:10.090 nvme2n1: ios=19128/0, merge=0/0, ticks=1230899/0, in_queue=1230899, util=98.00% 00:23:10.090 nvme3n1: ios=14026/0, merge=0/0, ticks=1209179/0, in_queue=1209179, util=98.07% 00:23:10.090 nvme4n1: ios=8779/0, merge=0/0, ticks=1244456/0, in_queue=1244456, util=98.35% 00:23:10.090 nvme5n1: ios=13797/0, merge=0/0, ticks=1215122/0, in_queue=1215122, util=98.28% 00:23:10.090 nvme6n1: ios=11797/0, merge=0/0, ticks=1216024/0, in_queue=1216024, util=98.07% 00:23:10.090 nvme7n1: ios=13764/0, merge=0/0, ticks=1211007/0, in_queue=1211007, util=98.41% 00:23:10.090 nvme8n1: ios=20010/0, 
merge=0/0, ticks=1206576/0, in_queue=1206576, util=98.41% 00:23:10.090 nvme9n1: ios=10916/0, merge=0/0, ticks=1242423/0, in_queue=1242423, util=98.70% 00:23:10.090 11:50:41 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:10.090 [global] 00:23:10.090 thread=1 00:23:10.090 invalidate=1 00:23:10.090 rw=randwrite 00:23:10.090 time_based=1 00:23:10.090 runtime=10 00:23:10.090 ioengine=libaio 00:23:10.090 direct=1 00:23:10.090 bs=262144 00:23:10.090 iodepth=64 00:23:10.090 norandommap=1 00:23:10.090 numjobs=1 00:23:10.090 00:23:10.090 [job0] 00:23:10.090 filename=/dev/nvme0n1 00:23:10.090 [job1] 00:23:10.090 filename=/dev/nvme10n1 00:23:10.090 [job2] 00:23:10.090 filename=/dev/nvme1n1 00:23:10.090 [job3] 00:23:10.090 filename=/dev/nvme2n1 00:23:10.090 [job4] 00:23:10.090 filename=/dev/nvme3n1 00:23:10.090 [job5] 00:23:10.090 filename=/dev/nvme4n1 00:23:10.090 [job6] 00:23:10.090 filename=/dev/nvme5n1 00:23:10.090 [job7] 00:23:10.090 filename=/dev/nvme6n1 00:23:10.090 [job8] 00:23:10.090 filename=/dev/nvme7n1 00:23:10.090 [job9] 00:23:10.090 filename=/dev/nvme8n1 00:23:10.090 [job10] 00:23:10.090 filename=/dev/nvme9n1 00:23:10.090 Could not set queue depth (nvme0n1) 00:23:10.090 Could not set queue depth (nvme10n1) 00:23:10.090 Could not set queue depth (nvme1n1) 00:23:10.090 Could not set queue depth (nvme2n1) 00:23:10.090 Could not set queue depth (nvme3n1) 00:23:10.090 Could not set queue depth (nvme4n1) 00:23:10.090 Could not set queue depth (nvme5n1) 00:23:10.090 Could not set queue depth (nvme6n1) 00:23:10.090 Could not set queue depth (nvme7n1) 00:23:10.090 Could not set queue depth (nvme8n1) 00:23:10.090 Could not set queue depth (nvme9n1) 00:23:10.090 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:10.090 fio-3.35 00:23:10.090 Starting 11 threads 00:23:20.087 00:23:20.087 job0: (groupid=0, jobs=1): err= 0: pid=80830: Wed Nov 20 11:50:51 2024 00:23:20.087 write: IOPS=558, BW=140MiB/s (146MB/s)(1411MiB/10103msec); 0 zone resets 00:23:20.087 slat (usec): min=19, max=26229, avg=1755.52, stdev=3056.66 00:23:20.087 clat (msec): min=4, max=228, avg=112.74, stdev=20.73 00:23:20.087 
lat (msec): min=4, max=229, avg=114.50, stdev=20.85 00:23:20.087 clat percentiles (msec): 00:23:20.087 | 1.00th=[ 62], 5.00th=[ 68], 10.00th=[ 90], 20.00th=[ 94], 00:23:20.087 | 30.00th=[ 99], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 126], 00:23:20.087 | 70.00th=[ 126], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:23:20.087 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 222], 99.95th=[ 222], 00:23:20.087 | 99.99th=[ 230] 00:23:20.087 bw ( KiB/s): min=125189, max=209920, per=7.08%, avg=142857.25, stdev=21015.21, samples=20 00:23:20.087 iops : min= 489, max= 820, avg=557.95, stdev=82.08, samples=20 00:23:20.087 lat (msec) : 10=0.04%, 20=0.14%, 50=0.35%, 100=30.75%, 250=68.72% 00:23:20.087 cpu : usr=1.44%, sys=1.06%, ctx=8182, majf=0, minf=1 00:23:20.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:20.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.087 issued rwts: total=0,5645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.087 job1: (groupid=0, jobs=1): err= 0: pid=80831: Wed Nov 20 11:50:51 2024 00:23:20.087 write: IOPS=1729, BW=432MiB/s (453MB/s)(4339MiB/10032msec); 0 zone resets 00:23:20.087 slat (usec): min=21, max=10035, avg=566.37, stdev=965.15 00:23:20.087 clat (msec): min=5, max=146, avg=36.42, stdev= 9.99 00:23:20.087 lat (msec): min=5, max=146, avg=36.99, stdev=10.12 00:23:20.087 clat percentiles (msec): 00:23:20.087 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:23:20.087 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:23:20.087 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 37], 95.00th=[ 64], 00:23:20.087 | 99.00th=[ 91], 99.50th=[ 96], 99.90th=[ 120], 99.95th=[ 133], 00:23:20.087 | 99.99th=[ 144] 00:23:20.087 bw ( KiB/s): min=237568, max=485888, per=21.93%, avg=442604.40, stdev=65047.57, samples=20 00:23:20.087 iops : min= 928, max= 1898, avg=1728.90, stdev=254.09, samples=20 00:23:20.087 lat (msec) : 10=0.06%, 20=1.12%, 50=93.00%, 100=5.42%, 250=0.40% 00:23:20.087 cpu : usr=4.12%, sys=2.53%, ctx=24854, majf=0, minf=1 00:23:20.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:20.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.087 issued rwts: total=0,17354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.087 job2: (groupid=0, jobs=1): err= 0: pid=80832: Wed Nov 20 11:50:51 2024 00:23:20.087 write: IOPS=563, BW=141MiB/s (148MB/s)(1423MiB/10106msec); 0 zone resets 00:23:20.087 slat (usec): min=23, max=12104, avg=1692.57, stdev=3019.25 00:23:20.087 clat (usec): min=1764, max=225283, avg=111922.98, stdev=22824.01 00:23:20.087 lat (usec): min=1836, max=225382, avg=113615.56, stdev=23093.78 00:23:20.087 clat percentiles (msec): 00:23:20.087 | 1.00th=[ 29], 5.00th=[ 65], 10.00th=[ 90], 20.00th=[ 96], 00:23:20.087 | 30.00th=[ 106], 40.00th=[ 117], 50.00th=[ 123], 60.00th=[ 124], 00:23:20.087 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 127], 95.00th=[ 131], 00:23:20.087 | 99.00th=[ 153], 99.50th=[ 171], 99.90th=[ 218], 99.95th=[ 218], 00:23:20.087 | 99.99th=[ 226] 00:23:20.087 bw ( KiB/s): min=121856, max=214957, per=7.14%, avg=144059.70, stdev=23924.23, samples=20 00:23:20.087 iops : min= 476, max= 839, 
avg=562.65, stdev=93.38, samples=20 00:23:20.087 lat (msec) : 2=0.02%, 4=0.05%, 10=0.33%, 20=0.11%, 50=2.07% 00:23:20.087 lat (msec) : 100=26.22%, 250=71.20% 00:23:20.087 cpu : usr=1.38%, sys=1.49%, ctx=8171, majf=0, minf=1 00:23:20.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:20.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.087 issued rwts: total=0,5690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.087 job3: (groupid=0, jobs=1): err= 0: pid=80837: Wed Nov 20 11:50:51 2024 00:23:20.087 write: IOPS=518, BW=130MiB/s (136MB/s)(1310MiB/10105msec); 0 zone resets 00:23:20.087 slat (usec): min=21, max=45326, avg=1904.83, stdev=3267.14 00:23:20.087 clat (msec): min=14, max=222, avg=121.51, stdev=15.70 00:23:20.087 lat (msec): min=14, max=222, avg=123.42, stdev=15.62 00:23:20.087 clat percentiles (msec): 00:23:20.087 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 116], 00:23:20.087 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 125], 00:23:20.087 | 70.00th=[ 126], 80.00th=[ 127], 90.00th=[ 132], 95.00th=[ 148], 00:23:20.087 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 215], 99.95th=[ 215], 00:23:20.087 | 99.99th=[ 224] 00:23:20.087 bw ( KiB/s): min=104448, max=175104, per=6.56%, avg=132452.80, stdev=14033.56, samples=20 00:23:20.087 iops : min= 408, max= 684, avg=517.20, stdev=54.89, samples=20 00:23:20.087 lat (msec) : 20=0.08%, 50=0.23%, 100=11.42%, 250=88.28% 00:23:20.087 cpu : usr=1.17%, sys=2.12%, ctx=7347, majf=0, minf=1 00:23:20.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:20.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.087 issued rwts: total=0,5238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.087 job4: (groupid=0, jobs=1): err= 0: pid=80840: Wed Nov 20 11:50:51 2024 00:23:20.087 write: IOPS=806, BW=202MiB/s (211MB/s)(2029MiB/10058msec); 0 zone resets 00:23:20.087 slat (usec): min=23, max=11665, avg=1217.19, stdev=2154.08 00:23:20.087 clat (msec): min=5, max=132, avg=78.08, stdev=22.53 00:23:20.087 lat (msec): min=5, max=132, avg=79.30, stdev=22.83 00:23:20.087 clat percentiles (msec): 00:23:20.087 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 63], 00:23:20.087 | 30.00th=[ 65], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 67], 00:23:20.087 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 127], 00:23:20.087 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 133], 00:23:20.087 | 99.99th=[ 133] 00:23:20.087 bw ( KiB/s): min=128512, max=256512, per=10.21%, avg=206058.80, stdev=50875.42, samples=20 00:23:20.087 iops : min= 502, max= 1002, avg=804.90, stdev=198.73, samples=20 00:23:20.087 lat (msec) : 10=0.10%, 20=0.30%, 50=0.89%, 100=84.02%, 250=14.70% 00:23:20.087 cpu : usr=1.99%, sys=1.66%, ctx=11863, majf=0, minf=1 00:23:20.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:20.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.087 issued rwts: total=0,8114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.087 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:23:20.087 job5: (groupid=0, jobs=1): err= 0: pid=80846: Wed Nov 20 11:50:51 2024 00:23:20.087 write: IOPS=522, BW=131MiB/s (137MB/s)(1320MiB/10104msec); 0 zone resets 00:23:20.087 slat (usec): min=17, max=47523, avg=1867.30, stdev=3288.82 00:23:20.087 clat (msec): min=4, max=225, avg=120.54, stdev=19.27 00:23:20.087 lat (msec): min=4, max=225, avg=122.41, stdev=19.36 00:23:20.087 clat percentiles (msec): 00:23:20.087 | 1.00th=[ 37], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 117], 00:23:20.087 | 30.00th=[ 120], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 126], 00:23:20.087 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 129], 95.00th=[ 148], 00:23:20.087 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 218], 99.95th=[ 218], 00:23:20.087 | 99.99th=[ 226] 00:23:20.087 bw ( KiB/s): min=103936, max=174592, per=6.62%, avg=133512.85, stdev=15639.97, samples=20 00:23:20.087 iops : min= 406, max= 682, avg=521.45, stdev=61.03, samples=20 00:23:20.087 lat (msec) : 10=0.08%, 20=0.36%, 50=0.97%, 100=12.63%, 250=85.97% 00:23:20.087 cpu : usr=1.44%, sys=1.30%, ctx=7109, majf=0, minf=1 00:23:20.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:20.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.087 issued rwts: total=0,5280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.088 job6: (groupid=0, jobs=1): err= 0: pid=80847: Wed Nov 20 11:50:51 2024 00:23:20.088 write: IOPS=549, BW=137MiB/s (144MB/s)(1388MiB/10102msec); 0 zone resets 00:23:20.088 slat (usec): min=18, max=34208, avg=1742.49, stdev=3119.66 00:23:20.088 clat (msec): min=19, max=224, avg=114.64, stdev=23.03 00:23:20.088 lat (msec): min=19, max=224, avg=116.38, stdev=23.30 00:23:20.088 clat percentiles (msec): 00:23:20.088 | 1.00th=[ 29], 5.00th=[ 66], 10.00th=[ 91], 20.00th=[ 97], 00:23:20.088 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 124], 00:23:20.088 | 70.00th=[ 125], 80.00th=[ 126], 90.00th=[ 129], 95.00th=[ 142], 00:23:20.088 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 218], 99.95th=[ 218], 00:23:20.088 | 99.99th=[ 224] 00:23:20.088 bw ( KiB/s): min=116736, max=189819, per=6.96%, avg=140534.90, stdev=19211.62, samples=20 00:23:20.088 iops : min= 456, max= 741, avg=548.75, stdev=75.09, samples=20 00:23:20.088 lat (msec) : 20=0.07%, 50=2.72%, 100=20.42%, 250=76.79% 00:23:20.088 cpu : usr=1.35%, sys=1.28%, ctx=7798, majf=0, minf=1 00:23:20.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:20.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.088 issued rwts: total=0,5553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.088 job7: (groupid=0, jobs=1): err= 0: pid=80848: Wed Nov 20 11:50:51 2024 00:23:20.088 write: IOPS=523, BW=131MiB/s (137MB/s)(1323MiB/10101msec); 0 zone resets 00:23:20.088 slat (usec): min=16, max=27604, avg=1862.31, stdev=3200.51 00:23:20.088 clat (msec): min=21, max=221, avg=120.28, stdev=15.65 00:23:20.088 lat (msec): min=21, max=221, avg=122.14, stdev=15.65 00:23:20.088 clat percentiles (msec): 00:23:20.088 | 1.00th=[ 75], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 116], 00:23:20.088 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 125], 
00:23:20.088 | 70.00th=[ 125], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 146], 00:23:20.088 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 213], 99.95th=[ 215], 00:23:20.088 | 99.99th=[ 222] 00:23:20.088 bw ( KiB/s): min=113152, max=172544, per=6.63%, avg=133810.70, stdev=12463.19, samples=20 00:23:20.088 iops : min= 442, max= 674, avg=522.60, stdev=48.71, samples=20 00:23:20.088 lat (msec) : 50=0.30%, 100=12.81%, 250=86.88% 00:23:20.088 cpu : usr=1.37%, sys=1.91%, ctx=7391, majf=0, minf=1 00:23:20.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:20.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.088 issued rwts: total=0,5291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.088 job8: (groupid=0, jobs=1): err= 0: pid=80849: Wed Nov 20 11:50:51 2024 00:23:20.088 write: IOPS=888, BW=222MiB/s (233MB/s)(2232MiB/10052msec); 0 zone resets 00:23:20.088 slat (usec): min=15, max=27109, avg=1103.19, stdev=1938.07 00:23:20.088 clat (msec): min=29, max=168, avg=70.94, stdev=16.35 00:23:20.088 lat (msec): min=29, max=168, avg=72.05, stdev=16.51 00:23:20.088 clat percentiles (msec): 00:23:20.088 | 1.00th=[ 61], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 63], 00:23:20.088 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 65], 60.00th=[ 66], 00:23:20.088 | 70.00th=[ 66], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 95], 00:23:20.088 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:23:20.088 | 99.99th=[ 169] 00:23:20.088 bw ( KiB/s): min=110592, max=254976, per=11.24%, avg=226843.10, stdev=42275.14, samples=20 00:23:20.088 iops : min= 432, max= 996, avg=886.10, stdev=165.13, samples=20 00:23:20.088 lat (msec) : 50=0.15%, 100=96.96%, 250=2.89% 00:23:20.088 cpu : usr=2.13%, sys=1.31%, ctx=13594, majf=0, minf=1 00:23:20.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:20.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.088 issued rwts: total=0,8927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.088 job9: (groupid=0, jobs=1): err= 0: pid=80850: Wed Nov 20 11:50:51 2024 00:23:20.088 write: IOPS=663, BW=166MiB/s (174MB/s)(1676MiB/10108msec); 0 zone resets 00:23:20.088 slat (usec): min=15, max=15621, avg=1476.73, stdev=2633.95 00:23:20.088 clat (msec): min=4, max=228, avg=95.00, stdev=27.71 00:23:20.088 lat (msec): min=4, max=228, avg=96.47, stdev=28.04 00:23:20.088 clat percentiles (msec): 00:23:20.088 | 1.00th=[ 61], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 64], 00:23:20.088 | 30.00th=[ 66], 40.00th=[ 90], 50.00th=[ 94], 60.00th=[ 100], 00:23:20.088 | 70.00th=[ 121], 80.00th=[ 126], 90.00th=[ 127], 95.00th=[ 128], 00:23:20.088 | 99.00th=[ 142], 99.50th=[ 165], 99.90th=[ 213], 99.95th=[ 222], 00:23:20.088 | 99.99th=[ 228] 00:23:20.088 bw ( KiB/s): min=127233, max=254976, per=8.42%, avg=169971.25, stdev=48999.19, samples=20 00:23:20.088 iops : min= 497, max= 996, avg=663.95, stdev=191.40, samples=20 00:23:20.088 lat (msec) : 10=0.07%, 20=0.06%, 50=0.58%, 100=59.54%, 250=39.74% 00:23:20.088 cpu : usr=1.49%, sys=2.01%, ctx=8476, majf=0, minf=1 00:23:20.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:20.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.088 issued rwts: total=0,6703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.088 job10: (groupid=0, jobs=1): err= 0: pid=80851: Wed Nov 20 11:50:51 2024 00:23:20.088 write: IOPS=583, BW=146MiB/s (153MB/s)(1475MiB/10110msec); 0 zone resets 00:23:20.088 slat (usec): min=22, max=25092, avg=1637.96, stdev=2934.54 00:23:20.088 clat (msec): min=2, max=236, avg=107.91, stdev=26.75 00:23:20.088 lat (msec): min=2, max=236, avg=109.55, stdev=27.07 00:23:20.088 clat percentiles (msec): 00:23:20.088 | 1.00th=[ 36], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 82], 00:23:20.088 | 30.00th=[ 96], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 126], 00:23:20.088 | 70.00th=[ 126], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:23:20.088 | 99.00th=[ 148], 99.50th=[ 180], 99.90th=[ 228], 99.95th=[ 228], 00:23:20.088 | 99.99th=[ 236] 00:23:20.088 bw ( KiB/s): min=125440, max=240640, per=7.40%, avg=149350.25, stdev=32866.10, samples=20 00:23:20.088 iops : min= 490, max= 940, avg=583.35, stdev=128.42, samples=20 00:23:20.088 lat (msec) : 4=0.05%, 10=0.12%, 20=0.07%, 50=1.98%, 100=30.26% 00:23:20.088 lat (msec) : 250=67.51% 00:23:20.088 cpu : usr=1.53%, sys=1.93%, ctx=7967, majf=0, minf=1 00:23:20.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:20.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:20.088 issued rwts: total=0,5898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:20.088 00:23:20.088 Run status group 0 (all jobs): 00:23:20.088 WRITE: bw=1971MiB/s (2066MB/s), 130MiB/s-432MiB/s (136MB/s-453MB/s), io=19.5GiB (20.9GB), run=10032-10110msec 00:23:20.088 00:23:20.088 Disk stats (read/write): 00:23:20.088 nvme0n1: ios=50/11211, merge=0/0, ticks=35/1220280, in_queue=1220315, util=98.33% 00:23:20.088 nvme10n1: ios=49/33848, merge=0/0, ticks=41/1196574, in_queue=1196615, util=98.47% 00:23:20.088 nvme1n1: ios=49/11299, merge=0/0, ticks=31/1221475, in_queue=1221506, util=98.48% 00:23:20.088 nvme2n1: ios=49/10386, merge=0/0, ticks=45/1219263, in_queue=1219308, util=98.52% 00:23:20.088 nvme3n1: ios=48/16180, merge=0/0, ticks=25/1223685, in_queue=1223710, util=98.52% 00:23:20.088 nvme4n1: ios=31/10473, merge=0/0, ticks=12/1220793, in_queue=1220805, util=98.56% 00:23:20.088 nvme5n1: ios=29/11020, merge=0/0, ticks=13/1221192, in_queue=1221205, util=98.57% 00:23:20.088 nvme6n1: ios=21/10492, merge=0/0, ticks=15/1220062, in_queue=1220077, util=98.54% 00:23:20.088 nvme7n1: ios=0/17789, merge=0/0, ticks=0/1223314, in_queue=1223314, util=98.62% 00:23:20.088 nvme8n1: ios=0/13322, merge=0/0, ticks=0/1220931, in_queue=1220931, util=98.81% 00:23:20.088 nvme9n1: ios=0/11738, merge=0/0, ticks=0/1222891, in_queue=1222891, util=98.93% 00:23:20.088 11:50:51 -- target/multiconnection.sh@36 -- # sync 00:23:20.088 11:50:51 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:20.088 11:50:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.088 11:50:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:20.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:20.088 11:50:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 
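(Editor's sketch, not part of the captured output: the trace entries that follow repeat one teardown pattern for cnode1 through cnode11 — disconnect the initiator from the subsystem, wait until the block device whose serial is SPDK$i disappears from lsblk, then delete the subsystem on the target. A simplified bash rendering, using the helper and variable names visible in the trace; the real waitforserial_disconnect also bounds its retries with a counter, which this sketch omits.)
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # drop the initiator-side connection for this subsystem
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # waitforserial_disconnect SPDK$i: poll until no block device reports that serial
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
        sleep 1
    done
    # remove the subsystem on the target (rpc_cmd is the test framework's RPC wrapper)
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done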
00:23:20.088 11:50:51 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.088 11:50:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.088 11:50:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:23:20.088 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.088 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:23:20.088 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.088 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.088 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.088 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.088 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.088 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.088 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:20.088 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:20.088 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:20.088 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.088 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.088 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:23:20.088 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.088 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:23:20.088 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.088 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:20.088 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.088 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.088 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.088 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.088 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:20.088 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:20.088 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:20.089 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:20.089 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:20.089 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:20.089 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:20.089 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:20.089 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:20.089 11:50:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:20.089 11:50:52 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:23:20.089 11:50:52 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:23:20.089 11:50:52 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:20.089 11:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.089 11:50:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:20.089 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:20.089 11:50:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:20.089 11:50:53 -- common/autotest_common.sh@1208 -- # local i=0 00:23:20.089 11:50:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:20.089 11:50:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:23:20.089 11:50:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:20.089 11:50:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:23:20.089 11:50:53 -- common/autotest_common.sh@1220 -- # return 0 00:23:20.089 11:50:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:20.089 11:50:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 11:50:53 -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 11:50:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 11:50:53 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:20.089 11:50:53 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:20.089 11:50:53 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:20.089 11:50:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:20.089 11:50:53 -- nvmf/common.sh@116 -- # sync 00:23:20.089 11:50:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:20.089 11:50:53 -- nvmf/common.sh@119 -- # set +e 00:23:20.089 11:50:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:20.089 11:50:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:20.089 rmmod nvme_tcp 00:23:20.089 rmmod nvme_fabrics 00:23:20.089 rmmod nvme_keyring 00:23:20.349 11:50:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:20.349 11:50:53 -- nvmf/common.sh@123 -- # set -e 00:23:20.349 11:50:53 -- nvmf/common.sh@124 -- # return 0 00:23:20.349 11:50:53 -- nvmf/common.sh@477 -- # '[' -n 80139 ']' 00:23:20.349 11:50:53 -- nvmf/common.sh@478 -- # killprocess 80139 00:23:20.349 11:50:53 -- common/autotest_common.sh@936 -- # '[' -z 80139 ']' 00:23:20.349 11:50:53 -- common/autotest_common.sh@940 -- # kill -0 80139 00:23:20.349 11:50:53 -- common/autotest_common.sh@941 -- # uname 00:23:20.349 11:50:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.349 11:50:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80139 00:23:20.349 11:50:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:20.349 11:50:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:20.349 killing process with pid 80139 00:23:20.349 11:50:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80139' 00:23:20.349 11:50:53 -- common/autotest_common.sh@955 -- # kill 80139 00:23:20.349 11:50:53 -- 
common/autotest_common.sh@960 -- # wait 80139 00:23:20.609 11:50:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:20.609 11:50:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:20.609 11:50:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:20.609 11:50:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.609 11:50:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:20.609 11:50:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.609 11:50:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.609 11:50:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.868 11:50:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:20.868 00:23:20.868 real 0m49.930s 00:23:20.868 user 2m52.453s 00:23:20.868 sys 0m23.575s 00:23:20.868 11:50:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:20.868 11:50:53 -- common/autotest_common.sh@10 -- # set +x 00:23:20.868 ************************************ 00:23:20.868 END TEST nvmf_multiconnection 00:23:20.868 ************************************ 00:23:20.868 11:50:53 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:20.868 11:50:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:20.868 11:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.868 11:50:53 -- common/autotest_common.sh@10 -- # set +x 00:23:20.868 ************************************ 00:23:20.868 START TEST nvmf_initiator_timeout 00:23:20.868 ************************************ 00:23:20.868 11:50:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:20.868 * Looking for test storage... 00:23:20.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:20.868 11:50:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:20.868 11:50:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:20.868 11:50:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:21.128 11:50:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:21.128 11:50:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:21.128 11:50:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:21.128 11:50:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:21.128 11:50:53 -- scripts/common.sh@335 -- # IFS=.-: 00:23:21.128 11:50:53 -- scripts/common.sh@335 -- # read -ra ver1 00:23:21.128 11:50:53 -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.128 11:50:53 -- scripts/common.sh@336 -- # read -ra ver2 00:23:21.128 11:50:53 -- scripts/common.sh@337 -- # local 'op=<' 00:23:21.128 11:50:53 -- scripts/common.sh@339 -- # ver1_l=2 00:23:21.128 11:50:53 -- scripts/common.sh@340 -- # ver2_l=1 00:23:21.128 11:50:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:21.128 11:50:53 -- scripts/common.sh@343 -- # case "$op" in 00:23:21.128 11:50:53 -- scripts/common.sh@344 -- # : 1 00:23:21.128 11:50:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:21.128 11:50:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.128 11:50:53 -- scripts/common.sh@364 -- # decimal 1 00:23:21.128 11:50:53 -- scripts/common.sh@352 -- # local d=1 00:23:21.128 11:50:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.128 11:50:53 -- scripts/common.sh@354 -- # echo 1 00:23:21.128 11:50:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:21.128 11:50:53 -- scripts/common.sh@365 -- # decimal 2 00:23:21.128 11:50:53 -- scripts/common.sh@352 -- # local d=2 00:23:21.128 11:50:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.128 11:50:53 -- scripts/common.sh@354 -- # echo 2 00:23:21.128 11:50:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:21.128 11:50:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:21.128 11:50:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:21.128 11:50:53 -- scripts/common.sh@367 -- # return 0 00:23:21.128 11:50:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.128 11:50:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:21.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.128 --rc genhtml_branch_coverage=1 00:23:21.128 --rc genhtml_function_coverage=1 00:23:21.128 --rc genhtml_legend=1 00:23:21.128 --rc geninfo_all_blocks=1 00:23:21.128 --rc geninfo_unexecuted_blocks=1 00:23:21.128 00:23:21.128 ' 00:23:21.128 11:50:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:21.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.128 --rc genhtml_branch_coverage=1 00:23:21.128 --rc genhtml_function_coverage=1 00:23:21.128 --rc genhtml_legend=1 00:23:21.128 --rc geninfo_all_blocks=1 00:23:21.128 --rc geninfo_unexecuted_blocks=1 00:23:21.129 00:23:21.129 ' 00:23:21.129 11:50:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:21.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.129 --rc genhtml_branch_coverage=1 00:23:21.129 --rc genhtml_function_coverage=1 00:23:21.129 --rc genhtml_legend=1 00:23:21.129 --rc geninfo_all_blocks=1 00:23:21.129 --rc geninfo_unexecuted_blocks=1 00:23:21.129 00:23:21.129 ' 00:23:21.129 11:50:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:21.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.129 --rc genhtml_branch_coverage=1 00:23:21.129 --rc genhtml_function_coverage=1 00:23:21.129 --rc genhtml_legend=1 00:23:21.129 --rc geninfo_all_blocks=1 00:23:21.129 --rc geninfo_unexecuted_blocks=1 00:23:21.129 00:23:21.129 ' 00:23:21.129 11:50:53 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:21.129 11:50:53 -- nvmf/common.sh@7 -- # uname -s 00:23:21.129 11:50:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.129 11:50:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.129 11:50:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.129 11:50:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.129 11:50:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.129 11:50:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.129 11:50:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.129 11:50:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.129 11:50:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.129 11:50:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.129 11:50:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
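(Editor's sketch, not part of the captured output: the scripts/common.sh trace above is the component-wise dotted-version comparison used for the lcov check "lt 1.15 2" — split both versions on ".-:", then compare field by field. A simplified bash rendering of that logic; the real cmp_versions helper also tracks lt/gt/eq flags as the trace shows.)
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # missing components compare as 0
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]
}
cmp_versions 1.15 '<' 2    # succeeds, matching the "lt 1.15 2" check traced above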
00:23:21.129 11:50:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:23:21.129 11:50:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.129 11:50:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.129 11:50:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:21.129 11:50:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.129 11:50:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.129 11:50:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.129 11:50:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.129 11:50:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.129 11:50:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.129 11:50:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.129 11:50:53 -- paths/export.sh@5 -- # export PATH 00:23:21.129 11:50:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.129 11:50:53 -- nvmf/common.sh@46 -- # : 0 00:23:21.129 11:50:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:21.129 11:50:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:21.129 11:50:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:21.129 11:50:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.129 11:50:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.129 11:50:53 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:21.129 11:50:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:21.129 11:50:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:21.129 11:50:53 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.129 11:50:53 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.129 11:50:53 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:21.129 11:50:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:21.129 11:50:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.129 11:50:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:21.129 11:50:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:21.129 11:50:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:21.129 11:50:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.129 11:50:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.129 11:50:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.129 11:50:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:21.129 11:50:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:21.129 11:50:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:21.129 11:50:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:21.129 11:50:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:21.129 11:50:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:21.129 11:50:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.129 11:50:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.129 11:50:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:21.129 11:50:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:21.129 11:50:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:21.129 11:50:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:21.129 11:50:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:21.129 11:50:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.129 11:50:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:21.129 11:50:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:21.129 11:50:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:21.129 11:50:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:21.129 11:50:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:21.129 11:50:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:21.129 Cannot find device "nvmf_tgt_br" 00:23:21.129 11:50:54 -- nvmf/common.sh@154 -- # true 00:23:21.129 11:50:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.129 Cannot find device "nvmf_tgt_br2" 00:23:21.129 11:50:54 -- nvmf/common.sh@155 -- # true 00:23:21.129 11:50:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:21.129 11:50:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:21.129 Cannot find device "nvmf_tgt_br" 00:23:21.129 11:50:54 -- nvmf/common.sh@157 -- # true 00:23:21.129 11:50:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:21.129 Cannot find device "nvmf_tgt_br2" 00:23:21.129 11:50:54 -- nvmf/common.sh@158 -- # true 00:23:21.129 11:50:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:21.129 11:50:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:21.129 11:50:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:23:21.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.129 11:50:54 -- nvmf/common.sh@161 -- # true 00:23:21.129 11:50:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:21.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.389 11:50:54 -- nvmf/common.sh@162 -- # true 00:23:21.389 11:50:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:21.389 11:50:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:21.389 11:50:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:21.389 11:50:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:21.389 11:50:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:21.389 11:50:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:21.389 11:50:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:21.389 11:50:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:21.389 11:50:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:21.389 11:50:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:21.390 11:50:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:21.390 11:50:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:21.390 11:50:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:21.390 11:50:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:21.390 11:50:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:21.390 11:50:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:21.390 11:50:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:21.390 11:50:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:21.390 11:50:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:21.390 11:50:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:21.390 11:50:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:21.390 11:50:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:21.390 11:50:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:21.390 11:50:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:21.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:23:21.390 00:23:21.390 --- 10.0.0.2 ping statistics --- 00:23:21.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.390 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:21.390 11:50:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:21.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:21.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:23:21.390 00:23:21.390 --- 10.0.0.3 ping statistics --- 00:23:21.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.390 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:21.390 11:50:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:21.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:21.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:23:21.390 00:23:21.390 --- 10.0.0.1 ping statistics --- 00:23:21.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.390 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:21.390 11:50:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.390 11:50:54 -- nvmf/common.sh@421 -- # return 0 00:23:21.390 11:50:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:21.390 11:50:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.390 11:50:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:21.390 11:50:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:21.390 11:50:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.390 11:50:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:21.390 11:50:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:21.390 11:50:54 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:21.390 11:50:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:21.390 11:50:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.390 11:50:54 -- common/autotest_common.sh@10 -- # set +x 00:23:21.390 11:50:54 -- nvmf/common.sh@469 -- # nvmfpid=81225 00:23:21.390 11:50:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:21.390 11:50:54 -- nvmf/common.sh@470 -- # waitforlisten 81225 00:23:21.390 11:50:54 -- common/autotest_common.sh@829 -- # '[' -z 81225 ']' 00:23:21.390 11:50:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.390 11:50:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.390 11:50:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.390 11:50:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.390 11:50:54 -- common/autotest_common.sh@10 -- # set +x 00:23:21.650 [2024-11-20 11:50:54.443629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:21.650 [2024-11-20 11:50:54.443724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.650 [2024-11-20 11:50:54.581091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.650 [2024-11-20 11:50:54.664761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:21.650 [2024-11-20 11:50:54.664880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.650 [2024-11-20 11:50:54.664887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.650 [2024-11-20 11:50:54.664893] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
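For readers following the trace, the nvmf_veth_init sequence above (the ip netns / ip link / iptables / ping commands) builds a small bridged topology: the initiator keeps 10.0.0.1 on the host side, while the target namespace nvmf_tgt_ns_spdk owns 10.0.0.2 and 10.0.0.3. A condensed sketch of that setup, with interface and namespace names taken from the trace (the canonical logic lives in scripts/common.sh and may differ in detail):

#!/usr/bin/env bash
# Condensed sketch of the veth topology built by nvmf_veth_init
# (names and addresses copied from the trace; illustration only).
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# One initiator-side and two target-side veth pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign 10.0.0.0/24 addresses.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peer interfaces together.
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and across the bridge, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

With that topology verified by the pings, nvmf_tgt is launched inside the namespace (note the "ip netns exec nvmf_tgt_ns_spdk" prefix prepended to NVMF_APP above), so the host-side nvme connect to 10.0.0.2:4420 later in the trace crosses the veth/bridge path just created.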
00:23:21.650 [2024-11-20 11:50:54.665021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.650 [2024-11-20 11:50:54.665379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.650 [2024-11-20 11:50:54.665119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.650 [2024-11-20 11:50:54.665383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.589 11:50:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.589 11:50:55 -- common/autotest_common.sh@862 -- # return 0 00:23:22.589 11:50:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:22.589 11:50:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:22.589 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 11:50:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:22.589 11:50:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.589 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 Malloc0 00:23:22.589 11:50:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:22.589 11:50:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.589 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 Delay0 00:23:22.589 11:50:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:22.589 11:50:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.589 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 [2024-11-20 11:50:55.367377] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.589 11:50:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:22.589 11:50:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.589 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 11:50:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:22.589 11:50:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.589 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 11:50:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.589 11:50:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.589 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 [2024-11-20 11:50:55.407451] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.589 11:50:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:22.589 11:50:55 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:22.589 11:50:55 -- common/autotest_common.sh@1187 -- # local i=0 00:23:22.589 11:50:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:23:22.589 11:50:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:23:22.589 11:50:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:23:25.127 11:50:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:23:25.127 11:50:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:23:25.127 11:50:57 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:23:25.127 11:50:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:23:25.127 11:50:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:23:25.127 11:50:57 -- common/autotest_common.sh@1197 -- # return 0 00:23:25.127 11:50:57 -- target/initiator_timeout.sh@35 -- # fio_pid=81309 00:23:25.127 11:50:57 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:25.127 11:50:57 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:25.127 [global] 00:23:25.127 thread=1 00:23:25.127 invalidate=1 00:23:25.127 rw=write 00:23:25.127 time_based=1 00:23:25.127 runtime=60 00:23:25.127 ioengine=libaio 00:23:25.127 direct=1 00:23:25.127 bs=4096 00:23:25.127 iodepth=1 00:23:25.127 norandommap=0 00:23:25.127 numjobs=1 00:23:25.127 00:23:25.127 verify_dump=1 00:23:25.127 verify_backlog=512 00:23:25.127 verify_state_save=0 00:23:25.127 do_verify=1 00:23:25.127 verify=crc32c-intel 00:23:25.127 [job0] 00:23:25.127 filename=/dev/nvme0n1 00:23:25.127 Could not set queue depth (nvme0n1) 00:23:25.127 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:25.127 fio-3.35 00:23:25.127 Starting 1 thread 00:23:27.666 11:51:00 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:27.666 11:51:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.666 11:51:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.666 true 00:23:27.666 11:51:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.666 11:51:00 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:27.666 11:51:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.666 11:51:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.666 true 00:23:27.666 11:51:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.666 11:51:00 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:27.666 11:51:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.666 11:51:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.666 true 00:23:27.666 11:51:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.666 11:51:00 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:27.666 11:51:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.666 11:51:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.666 true 00:23:27.666 11:51:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.666 11:51:00 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:23:30.963 11:51:03 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:30.963 11:51:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.963 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:23:30.963 true 00:23:30.963 11:51:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.963 11:51:03 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:30.963 11:51:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.963 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:23:30.963 true 00:23:30.963 11:51:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.963 11:51:03 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:30.963 11:51:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.963 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:23:30.963 true 00:23:30.963 11:51:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.963 11:51:03 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:30.963 11:51:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.963 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:23:30.963 true 00:23:30.963 11:51:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.963 11:51:03 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:30.963 11:51:03 -- target/initiator_timeout.sh@54 -- # wait 81309 00:24:27.215 00:24:27.215 job0: (groupid=0, jobs=1): err= 0: pid=81336: Wed Nov 20 11:51:57 2024 00:24:27.215 read: IOPS=1211, BW=4847KiB/s (4963kB/s)(284MiB/60000msec) 00:24:27.215 slat (nsec): min=6010, max=27542, avg=7004.18, stdev=864.33 00:24:27.215 clat (usec): min=114, max=761, avg=134.90, stdev=13.76 00:24:27.215 lat (usec): min=120, max=768, avg=141.91, stdev=14.07 00:24:27.215 clat percentiles (usec): 00:24:27.215 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:24:27.215 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:24:27.215 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 147], 00:24:27.215 | 99.00th=[ 167], 99.50th=[ 186], 99.90th=[ 347], 99.95th=[ 437], 00:24:27.215 | 99.99th=[ 494] 00:24:27.215 write: IOPS=1219, BW=4877KiB/s (4994kB/s)(286MiB/60000msec); 0 zone resets 00:24:27.215 slat (usec): min=7, max=13720, avg=10.89, stdev=58.39 00:24:27.215 clat (usec): min=31, max=40545k, avg=666.62, stdev=149902.09 00:24:27.215 lat (usec): min=103, max=40545k, avg=677.51, stdev=149902.11 00:24:27.215 clat percentiles (usec): 00:24:27.215 | 1.00th=[ 100], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 108], 00:24:27.216 | 30.00th=[ 110], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 114], 00:24:27.216 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 124], 00:24:27.216 | 99.00th=[ 137], 99.50th=[ 147], 99.90th=[ 167], 99.95th=[ 188], 00:24:27.216 | 99.99th=[ 247] 00:24:27.216 bw ( KiB/s): min= 6264, max=16384, per=100.00%, avg=14675.28, stdev=2144.22, samples=39 00:24:27.216 iops : min= 1566, max= 4096, avg=3668.82, stdev=536.05, samples=39 00:24:27.216 lat (usec) : 50=0.01%, 100=0.54%, 250=99.37%, 500=0.08%, 750=0.01% 00:24:27.216 lat (usec) : 1000=0.01% 00:24:27.216 lat (msec) : >=2000=0.01% 00:24:27.216 cpu : usr=0.39%, sys=1.60%, ctx=145871, majf=0, minf=5 00:24:27.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:27.216 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.216 issued rwts: total=72704,73157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:27.216 00:24:27.216 Run status group 0 (all jobs): 00:24:27.216 READ: bw=4847KiB/s (4963kB/s), 4847KiB/s-4847KiB/s (4963kB/s-4963kB/s), io=284MiB (298MB), run=60000-60000msec 00:24:27.216 WRITE: bw=4877KiB/s (4994kB/s), 4877KiB/s-4877KiB/s (4994kB/s-4994kB/s), io=286MiB (300MB), run=60000-60000msec 00:24:27.216 00:24:27.216 Disk stats (read/write): 00:24:27.216 nvme0n1: ios=72718/72704, merge=0/0, ticks=10041/8385, in_queue=18426, util=99.73% 00:24:27.216 11:51:57 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:27.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:27.216 11:51:57 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:27.216 11:51:57 -- common/autotest_common.sh@1208 -- # local i=0 00:24:27.216 11:51:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:24:27.216 11:51:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:27.216 11:51:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:24:27.216 11:51:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:27.216 11:51:58 -- common/autotest_common.sh@1220 -- # return 0 00:24:27.216 11:51:58 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:27.216 nvmf hotplug test: fio successful as expected 00:24:27.216 11:51:58 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:27.216 11:51:58 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.216 11:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.216 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:27.216 11:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.216 11:51:58 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:27.216 11:51:58 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:27.216 11:51:58 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:27.216 11:51:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:27.216 11:51:58 -- nvmf/common.sh@116 -- # sync 00:24:27.216 11:51:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:27.216 11:51:58 -- nvmf/common.sh@119 -- # set +e 00:24:27.216 11:51:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:27.216 11:51:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:27.216 rmmod nvme_tcp 00:24:27.216 rmmod nvme_fabrics 00:24:27.216 rmmod nvme_keyring 00:24:27.216 11:51:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:27.216 11:51:58 -- nvmf/common.sh@123 -- # set -e 00:24:27.216 11:51:58 -- nvmf/common.sh@124 -- # return 0 00:24:27.216 11:51:58 -- nvmf/common.sh@477 -- # '[' -n 81225 ']' 00:24:27.216 11:51:58 -- nvmf/common.sh@478 -- # killprocess 81225 00:24:27.216 11:51:58 -- common/autotest_common.sh@936 -- # '[' -z 81225 ']' 00:24:27.216 11:51:58 -- common/autotest_common.sh@940 -- # kill -0 81225 00:24:27.216 11:51:58 -- common/autotest_common.sh@941 -- # uname 00:24:27.216 11:51:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:27.216 11:51:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81225 00:24:27.216 
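The initiator timeout check that just finished hinges on the Delay0 bdev created earlier with 30 us latencies: a few seconds into the 60 s fio write job on /dev/nvme0n1, every latency class is raised to roughly 31 s (310 s for p99 write), held for a few more seconds, then restored, and fio is still expected to complete cleanly, which the "fio successful as expected" message above confirms. rpc_cmd in the trace ultimately talks to the target over /var/tmp/spdk.sock; reproducing the bump-and-restore by hand might look like the sketch below (scripts/rpc.py invocation and socket path assumed, latency values copied from the trace):

# Sketch only: manual equivalent of the bdev_delay_update_latency sequence.
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

# Stall the delay bdev while fio is running: ~31 s per I/O class,
# 310 s for p99 writes (values are microseconds).
$RPC bdev_delay_update_latency Delay0 avg_read  31000000
$RPC bdev_delay_update_latency Delay0 avg_write 31000000
$RPC bdev_delay_update_latency Delay0 p99_read  31000000
$RPC bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3

# Restore the original 30 us latencies; the fio job should still finish
# within its 60 s runtime despite the temporary stall.
for lat in avg_read avg_write p99_read p99_write; do
    $RPC bdev_delay_update_latency Delay0 "$lat" 30
done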
11:51:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:27.216 11:51:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:27.216 killing process with pid 81225 00:24:27.216 11:51:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81225' 00:24:27.216 11:51:58 -- common/autotest_common.sh@955 -- # kill 81225 00:24:27.216 11:51:58 -- common/autotest_common.sh@960 -- # wait 81225 00:24:27.216 11:51:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:27.216 11:51:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:27.216 11:51:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:27.216 11:51:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.216 11:51:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:27.216 11:51:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.216 11:51:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.216 11:51:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.216 11:51:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:27.216 00:24:27.216 real 1m4.690s 00:24:27.216 user 4m8.465s 00:24:27.216 sys 0m6.926s 00:24:27.216 11:51:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:27.216 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:27.216 ************************************ 00:24:27.216 END TEST nvmf_initiator_timeout 00:24:27.216 ************************************ 00:24:27.216 11:51:58 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:24:27.216 11:51:58 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:27.216 11:51:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.216 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:27.216 11:51:58 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:27.216 11:51:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:27.216 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:27.216 11:51:58 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:27.216 11:51:58 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:27.216 11:51:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:27.216 11:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:27.216 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:27.216 ************************************ 00:24:27.216 START TEST nvmf_multicontroller 00:24:27.216 ************************************ 00:24:27.216 11:51:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:27.216 * Looking for test storage... 
00:24:27.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:27.216 11:51:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:27.216 11:51:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:27.216 11:51:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:27.216 11:51:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:27.216 11:51:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:27.216 11:51:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:27.216 11:51:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:27.216 11:51:58 -- scripts/common.sh@335 -- # IFS=.-: 00:24:27.216 11:51:58 -- scripts/common.sh@335 -- # read -ra ver1 00:24:27.216 11:51:58 -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.216 11:51:58 -- scripts/common.sh@336 -- # read -ra ver2 00:24:27.216 11:51:58 -- scripts/common.sh@337 -- # local 'op=<' 00:24:27.216 11:51:58 -- scripts/common.sh@339 -- # ver1_l=2 00:24:27.216 11:51:58 -- scripts/common.sh@340 -- # ver2_l=1 00:24:27.216 11:51:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:27.216 11:51:58 -- scripts/common.sh@343 -- # case "$op" in 00:24:27.216 11:51:58 -- scripts/common.sh@344 -- # : 1 00:24:27.216 11:51:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:27.216 11:51:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.216 11:51:58 -- scripts/common.sh@364 -- # decimal 1 00:24:27.216 11:51:58 -- scripts/common.sh@352 -- # local d=1 00:24:27.216 11:51:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.216 11:51:58 -- scripts/common.sh@354 -- # echo 1 00:24:27.216 11:51:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:27.216 11:51:58 -- scripts/common.sh@365 -- # decimal 2 00:24:27.216 11:51:58 -- scripts/common.sh@352 -- # local d=2 00:24:27.216 11:51:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.216 11:51:58 -- scripts/common.sh@354 -- # echo 2 00:24:27.216 11:51:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:27.216 11:51:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:27.216 11:51:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:27.216 11:51:58 -- scripts/common.sh@367 -- # return 0 00:24:27.216 11:51:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.216 11:51:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:27.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.216 --rc genhtml_branch_coverage=1 00:24:27.216 --rc genhtml_function_coverage=1 00:24:27.216 --rc genhtml_legend=1 00:24:27.216 --rc geninfo_all_blocks=1 00:24:27.216 --rc geninfo_unexecuted_blocks=1 00:24:27.217 00:24:27.217 ' 00:24:27.217 11:51:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:27.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.217 --rc genhtml_branch_coverage=1 00:24:27.217 --rc genhtml_function_coverage=1 00:24:27.217 --rc genhtml_legend=1 00:24:27.217 --rc geninfo_all_blocks=1 00:24:27.217 --rc geninfo_unexecuted_blocks=1 00:24:27.217 00:24:27.217 ' 00:24:27.217 11:51:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:27.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.217 --rc genhtml_branch_coverage=1 00:24:27.217 --rc genhtml_function_coverage=1 00:24:27.217 --rc genhtml_legend=1 00:24:27.217 --rc geninfo_all_blocks=1 00:24:27.217 --rc geninfo_unexecuted_blocks=1 00:24:27.217 00:24:27.217 ' 00:24:27.217 
11:51:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:27.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.217 --rc genhtml_branch_coverage=1 00:24:27.217 --rc genhtml_function_coverage=1 00:24:27.217 --rc genhtml_legend=1 00:24:27.217 --rc geninfo_all_blocks=1 00:24:27.217 --rc geninfo_unexecuted_blocks=1 00:24:27.217 00:24:27.217 ' 00:24:27.217 11:51:58 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:27.217 11:51:58 -- nvmf/common.sh@7 -- # uname -s 00:24:27.217 11:51:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.217 11:51:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.217 11:51:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.217 11:51:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.217 11:51:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.217 11:51:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.217 11:51:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.217 11:51:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.217 11:51:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.217 11:51:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.217 11:51:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:24:27.217 11:51:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:24:27.217 11:51:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.217 11:51:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.217 11:51:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:27.217 11:51:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:27.217 11:51:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.217 11:51:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.217 11:51:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.217 11:51:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.217 11:51:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.217 11:51:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.217 11:51:58 -- paths/export.sh@5 -- # export PATH 00:24:27.217 11:51:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.217 11:51:58 -- nvmf/common.sh@46 -- # : 0 00:24:27.217 11:51:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:27.217 11:51:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:27.217 11:51:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:27.217 11:51:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.217 11:51:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.217 11:51:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:27.217 11:51:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:27.217 11:51:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:27.217 11:51:58 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.217 11:51:58 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.217 11:51:58 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:27.217 11:51:58 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:27.217 11:51:58 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.217 11:51:58 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:27.217 11:51:58 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:27.217 11:51:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:27.217 11:51:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.217 11:51:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:27.217 11:51:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:27.217 11:51:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:27.217 11:51:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.217 11:51:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.217 11:51:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.217 11:51:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:27.217 11:51:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:27.217 11:51:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:27.217 11:51:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:27.217 11:51:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:27.217 11:51:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:27.217 11:51:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.217 11:51:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:24:27.217 11:51:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:27.217 11:51:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:27.217 11:51:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:27.217 11:51:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:27.217 11:51:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:27.217 11:51:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.217 11:51:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:27.217 11:51:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:27.217 11:51:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:27.217 11:51:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:27.217 11:51:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:27.217 11:51:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:27.217 Cannot find device "nvmf_tgt_br" 00:24:27.217 11:51:58 -- nvmf/common.sh@154 -- # true 00:24:27.217 11:51:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:27.217 Cannot find device "nvmf_tgt_br2" 00:24:27.217 11:51:58 -- nvmf/common.sh@155 -- # true 00:24:27.217 11:51:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:27.217 11:51:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:27.217 Cannot find device "nvmf_tgt_br" 00:24:27.217 11:51:58 -- nvmf/common.sh@157 -- # true 00:24:27.217 11:51:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:27.217 Cannot find device "nvmf_tgt_br2" 00:24:27.217 11:51:58 -- nvmf/common.sh@158 -- # true 00:24:27.217 11:51:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:27.217 11:51:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:27.217 11:51:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:27.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.217 11:51:59 -- nvmf/common.sh@161 -- # true 00:24:27.217 11:51:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:27.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.217 11:51:59 -- nvmf/common.sh@162 -- # true 00:24:27.217 11:51:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:27.217 11:51:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:27.217 11:51:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:27.217 11:51:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:27.217 11:51:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:27.217 11:51:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:27.217 11:51:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:27.217 11:51:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:27.217 11:51:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:27.217 11:51:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:27.217 11:51:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:27.217 11:51:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:24:27.217 11:51:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:27.217 11:51:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:27.217 11:51:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:27.217 11:51:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:27.217 11:51:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:27.217 11:51:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:27.217 11:51:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:27.217 11:51:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:27.217 11:51:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:27.217 11:51:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:27.218 11:51:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:27.218 11:51:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:27.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:24:27.218 00:24:27.218 --- 10.0.0.2 ping statistics --- 00:24:27.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.218 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:27.218 11:51:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:27.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:27.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:24:27.218 00:24:27.218 --- 10.0.0.3 ping statistics --- 00:24:27.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.218 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:27.218 11:51:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:27.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:27.218 00:24:27.218 --- 10.0.0.1 ping statistics --- 00:24:27.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.218 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:27.218 11:51:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.218 11:51:59 -- nvmf/common.sh@421 -- # return 0 00:24:27.218 11:51:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:27.218 11:51:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.218 11:51:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:27.218 11:51:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:27.218 11:51:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.218 11:51:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:27.218 11:51:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:27.218 11:51:59 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:27.218 11:51:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:27.218 11:51:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:27.218 11:51:59 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 11:51:59 -- nvmf/common.sh@469 -- # nvmfpid=82175 00:24:27.218 11:51:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:27.218 11:51:59 -- nvmf/common.sh@470 -- # waitforlisten 82175 00:24:27.218 11:51:59 -- common/autotest_common.sh@829 -- # '[' -z 82175 ']' 00:24:27.218 11:51:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.218 11:51:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.218 11:51:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.218 11:51:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.218 11:51:59 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 [2024-11-20 11:51:59.235049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:27.218 [2024-11-20 11:51:59.235101] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.218 [2024-11-20 11:51:59.371505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:27.218 [2024-11-20 11:51:59.448963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:27.218 [2024-11-20 11:51:59.449078] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.218 [2024-11-20 11:51:59.449086] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.218 [2024-11-20 11:51:59.449091] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:27.218 [2024-11-20 11:51:59.449265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.218 [2024-11-20 11:51:59.449469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.218 [2024-11-20 11:51:59.449480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.218 11:52:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.218 11:52:00 -- common/autotest_common.sh@862 -- # return 0 00:24:27.218 11:52:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:27.218 11:52:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 11:52:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.218 11:52:00 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 [2024-11-20 11:52:00.140149] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.218 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.218 11:52:00 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 Malloc0 00:24:27.218 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.218 11:52:00 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.218 11:52:00 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.218 11:52:00 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 [2024-11-20 11:52:00.208709] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.218 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.218 11:52:00 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 [2024-11-20 11:52:00.220587] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:27.218 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.218 11:52:00 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.218 Malloc1 00:24:27.218 11:52:00 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.218 11:52:00 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:27.218 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.218 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.478 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.478 11:52:00 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:27.478 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.478 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.478 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.478 11:52:00 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:27.478 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.478 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.478 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.478 11:52:00 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:27.478 11:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.478 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.478 11:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.478 11:52:00 -- host/multicontroller.sh@44 -- # bdevperf_pid=82227 00:24:27.478 11:52:00 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:27.478 11:52:00 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.478 11:52:00 -- host/multicontroller.sh@47 -- # waitforlisten 82227 /var/tmp/bdevperf.sock 00:24:27.478 11:52:00 -- common/autotest_common.sh@829 -- # '[' -z 82227 ']' 00:24:27.478 11:52:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.478 11:52:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.478 11:52:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
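Everything from here on is driven against bdevperf's private RPC socket rather than the target's. Condensed, and assuming rpc_cmd forwards to scripts/rpc.py, the multipath checks that follow amount to the sequence sketched below: one real attach, several attempts that are expected to fail with -114 because a controller named NVMe0 already exists, and finally a second path and a second named controller on port 4421 (names, addresses, and ports copied from the trace):

# Sketch only: the attach/detach sequence exercised by multicontroller.sh below.
RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

# First attach creates NVMe0n1 against cnode1 on port 4420.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# Re-attaching the same controller name with a different hostnqn, a different
# subsystem, multipath disabled, or multipath failover on the same 4420 path
# is expected to fail with -114 ("A controller named NVMe0 already exists ..."),
# as the JSON-RPC error responses below show.

# Port 4421 can be added as a second path for NVMe0, detached again, and then
# attached under a new name (NVMe1) to the same subsystem.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

The bdev_nvme_get_controllers | grep -c NVMe check near the end of the trace simply confirms that both NVMe0 and NVMe1 are registered before bdevperf.py perform_tests is kicked off.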
00:24:27.478 11:52:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.478 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:24:28.415 11:52:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.415 11:52:01 -- common/autotest_common.sh@862 -- # return 0 00:24:28.415 11:52:01 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:28.415 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.415 NVMe0n1 00:24:28.415 11:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.415 11:52:01 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.415 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.415 11:52:01 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:28.415 11:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.415 1 00:24:28.415 11:52:01 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:28.415 11:52:01 -- common/autotest_common.sh@650 -- # local es=0 00:24:28.415 11:52:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:28.415 11:52:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.415 11:52:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:28.415 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.415 2024/11/20 11:52:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:28.415 request: 00:24:28.415 { 00:24:28.415 "method": "bdev_nvme_attach_controller", 00:24:28.415 "params": { 00:24:28.415 "name": "NVMe0", 00:24:28.415 "trtype": "tcp", 00:24:28.415 "traddr": "10.0.0.2", 00:24:28.415 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:28.415 "hostaddr": "10.0.0.2", 00:24:28.415 "hostsvcid": "60000", 00:24:28.415 "adrfam": "ipv4", 00:24:28.415 "trsvcid": "4420", 00:24:28.415 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:24:28.415 } 00:24:28.415 } 00:24:28.415 Got JSON-RPC error response 00:24:28.415 GoRPCClient: error on JSON-RPC call 00:24:28.415 11:52:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:28.415 11:52:01 -- 
common/autotest_common.sh@653 -- # es=1 00:24:28.415 11:52:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:28.415 11:52:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:28.415 11:52:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:28.415 11:52:01 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:28.415 11:52:01 -- common/autotest_common.sh@650 -- # local es=0 00:24:28.415 11:52:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:28.415 11:52:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.415 11:52:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:28.415 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.415 2024/11/20 11:52:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:28.415 request: 00:24:28.415 { 00:24:28.415 "method": "bdev_nvme_attach_controller", 00:24:28.415 "params": { 00:24:28.415 "name": "NVMe0", 00:24:28.415 "trtype": "tcp", 00:24:28.415 "traddr": "10.0.0.2", 00:24:28.415 "hostaddr": "10.0.0.2", 00:24:28.415 "hostsvcid": "60000", 00:24:28.415 "adrfam": "ipv4", 00:24:28.415 "trsvcid": "4420", 00:24:28.415 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:24:28.415 } 00:24:28.415 } 00:24:28.415 Got JSON-RPC error response 00:24:28.415 GoRPCClient: error on JSON-RPC call 00:24:28.415 11:52:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:28.415 11:52:01 -- common/autotest_common.sh@653 -- # es=1 00:24:28.415 11:52:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:28.415 11:52:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:28.415 11:52:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:28.415 11:52:01 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@650 -- # local es=0 00:24:28.415 11:52:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.415 11:52:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:28.415 11:52:01 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.415 11:52:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.415 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.415 2024/11/20 11:52:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:24:28.415 request: 00:24:28.415 { 00:24:28.415 "method": "bdev_nvme_attach_controller", 00:24:28.415 "params": { 00:24:28.415 "name": "NVMe0", 00:24:28.415 "trtype": "tcp", 00:24:28.415 "traddr": "10.0.0.2", 00:24:28.415 "hostaddr": "10.0.0.2", 00:24:28.415 "hostsvcid": "60000", 00:24:28.415 "adrfam": "ipv4", 00:24:28.415 "trsvcid": "4420", 00:24:28.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.415 "multipath": "disable" 00:24:28.415 } 00:24:28.415 } 00:24:28.415 Got JSON-RPC error response 00:24:28.415 GoRPCClient: error on JSON-RPC call 00:24:28.415 11:52:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:28.416 11:52:01 -- common/autotest_common.sh@653 -- # es=1 00:24:28.416 11:52:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:28.416 11:52:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:28.416 11:52:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:28.416 11:52:01 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:28.416 11:52:01 -- common/autotest_common.sh@650 -- # local es=0 00:24:28.416 11:52:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:28.416 11:52:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:28.416 11:52:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.416 11:52:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:28.416 11:52:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.416 11:52:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:28.416 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.416 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.416 2024/11/20 11:52:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:28.416 request: 00:24:28.416 { 00:24:28.416 "method": "bdev_nvme_attach_controller", 00:24:28.416 "params": { 00:24:28.416 "name": "NVMe0", 
00:24:28.416 "trtype": "tcp", 00:24:28.416 "traddr": "10.0.0.2", 00:24:28.416 "hostaddr": "10.0.0.2", 00:24:28.416 "hostsvcid": "60000", 00:24:28.416 "adrfam": "ipv4", 00:24:28.416 "trsvcid": "4420", 00:24:28.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.416 "multipath": "failover" 00:24:28.416 } 00:24:28.416 } 00:24:28.416 Got JSON-RPC error response 00:24:28.416 GoRPCClient: error on JSON-RPC call 00:24:28.416 11:52:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:28.416 11:52:01 -- common/autotest_common.sh@653 -- # es=1 00:24:28.416 11:52:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:28.416 11:52:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:28.416 11:52:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:28.416 11:52:01 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.416 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.416 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.416 00:24:28.416 11:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.416 11:52:01 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.416 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.416 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.674 11:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.674 11:52:01 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:28.674 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.674 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.674 00:24:28.675 11:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.675 11:52:01 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.675 11:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.675 11:52:01 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:28.675 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.675 11:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.675 11:52:01 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:28.675 11:52:01 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:30.055 0 00:24:30.055 11:52:02 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:30.055 11:52:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.055 11:52:02 -- common/autotest_common.sh@10 -- # set +x 00:24:30.055 11:52:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.055 11:52:02 -- host/multicontroller.sh@100 -- # killprocess 82227 00:24:30.055 11:52:02 -- common/autotest_common.sh@936 -- # '[' -z 82227 ']' 00:24:30.055 11:52:02 -- common/autotest_common.sh@940 -- # kill -0 82227 00:24:30.055 11:52:02 -- common/autotest_common.sh@941 -- # uname 00:24:30.055 11:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:30.055 11:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82227 00:24:30.055 11:52:02 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:24:30.055 killing process with pid 82227 00:24:30.055 11:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:30.055 11:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82227' 00:24:30.055 11:52:02 -- common/autotest_common.sh@955 -- # kill 82227 00:24:30.055 11:52:02 -- common/autotest_common.sh@960 -- # wait 82227 00:24:30.055 11:52:02 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.055 11:52:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.055 11:52:02 -- common/autotest_common.sh@10 -- # set +x 00:24:30.055 11:52:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.055 11:52:02 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:30.055 11:52:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.055 11:52:02 -- common/autotest_common.sh@10 -- # set +x 00:24:30.055 11:52:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.055 11:52:02 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:30.055 11:52:02 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:30.055 11:52:02 -- common/autotest_common.sh@1607 -- # read -r file 00:24:30.055 11:52:02 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:24:30.055 11:52:02 -- common/autotest_common.sh@1606 -- # sort -u 00:24:30.055 11:52:02 -- common/autotest_common.sh@1608 -- # cat 00:24:30.055 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:30.055 [2024-11-20 11:52:00.349490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:30.055 [2024-11-20 11:52:00.349562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82227 ] 00:24:30.055 [2024-11-20 11:52:00.470361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.055 [2024-11-20 11:52:00.554511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.055 [2024-11-20 11:52:01.525320] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 7c14827d-f5ed-4247-a034-a9aa5d535f53 already exists 00:24:30.055 [2024-11-20 11:52:01.525364] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:7c14827d-f5ed-4247-a034-a9aa5d535f53 alias for bdev NVMe1n1 00:24:30.056 [2024-11-20 11:52:01.525378] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:30.056 Running I/O for 1 seconds... 
00:24:30.056 00:24:30.056 Latency(us) 00:24:30.056 [2024-11-20T11:52:03.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.056 [2024-11-20T11:52:03.099Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:30.056 NVMe0n1 : 1.00 28109.64 109.80 0.00 0.00 4543.83 2475.49 10474.31 00:24:30.056 [2024-11-20T11:52:03.099Z] =================================================================================================================== 00:24:30.056 [2024-11-20T11:52:03.099Z] Total : 28109.64 109.80 0.00 0.00 4543.83 2475.49 10474.31 00:24:30.056 Received shutdown signal, test time was about 1.000000 seconds 00:24:30.056 00:24:30.056 Latency(us) 00:24:30.056 [2024-11-20T11:52:03.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.056 [2024-11-20T11:52:03.099Z] =================================================================================================================== 00:24:30.056 [2024-11-20T11:52:03.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.056 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:30.056 11:52:02 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:30.056 11:52:02 -- common/autotest_common.sh@1607 -- # read -r file 00:24:30.056 11:52:02 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:30.056 11:52:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:30.056 11:52:02 -- nvmf/common.sh@116 -- # sync 00:24:30.056 11:52:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:30.056 11:52:03 -- nvmf/common.sh@119 -- # set +e 00:24:30.056 11:52:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:30.056 11:52:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:30.056 rmmod nvme_tcp 00:24:30.056 rmmod nvme_fabrics 00:24:30.315 rmmod nvme_keyring 00:24:30.315 11:52:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:30.315 11:52:03 -- nvmf/common.sh@123 -- # set -e 00:24:30.315 11:52:03 -- nvmf/common.sh@124 -- # return 0 00:24:30.315 11:52:03 -- nvmf/common.sh@477 -- # '[' -n 82175 ']' 00:24:30.315 11:52:03 -- nvmf/common.sh@478 -- # killprocess 82175 00:24:30.315 11:52:03 -- common/autotest_common.sh@936 -- # '[' -z 82175 ']' 00:24:30.315 11:52:03 -- common/autotest_common.sh@940 -- # kill -0 82175 00:24:30.315 11:52:03 -- common/autotest_common.sh@941 -- # uname 00:24:30.315 11:52:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:30.315 11:52:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82175 00:24:30.315 11:52:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:30.315 11:52:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:30.315 killing process with pid 82175 00:24:30.315 11:52:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82175' 00:24:30.315 11:52:03 -- common/autotest_common.sh@955 -- # kill 82175 00:24:30.315 11:52:03 -- common/autotest_common.sh@960 -- # wait 82175 00:24:30.586 11:52:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.586 11:52:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:30.586 11:52:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:30.586 11:52:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.586 11:52:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:30.586 11:52:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.586 11:52:03 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:24:30.586 11:52:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.586 11:52:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:30.586 00:24:30.586 real 0m4.881s 00:24:30.586 user 0m14.926s 00:24:30.586 sys 0m1.075s 00:24:30.586 11:52:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:30.586 11:52:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.586 ************************************ 00:24:30.586 END TEST nvmf_multicontroller 00:24:30.586 ************************************ 00:24:30.586 11:52:03 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:30.586 11:52:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:30.586 11:52:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:30.586 11:52:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.586 ************************************ 00:24:30.586 START TEST nvmf_aer 00:24:30.586 ************************************ 00:24:30.587 11:52:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:30.863 * Looking for test storage... 00:24:30.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:30.863 11:52:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:30.863 11:52:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:30.863 11:52:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:30.863 11:52:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:30.863 11:52:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:30.863 11:52:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:30.863 11:52:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:30.863 11:52:03 -- scripts/common.sh@335 -- # IFS=.-: 00:24:30.863 11:52:03 -- scripts/common.sh@335 -- # read -ra ver1 00:24:30.863 11:52:03 -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.863 11:52:03 -- scripts/common.sh@336 -- # read -ra ver2 00:24:30.863 11:52:03 -- scripts/common.sh@337 -- # local 'op=<' 00:24:30.863 11:52:03 -- scripts/common.sh@339 -- # ver1_l=2 00:24:30.863 11:52:03 -- scripts/common.sh@340 -- # ver2_l=1 00:24:30.863 11:52:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:30.863 11:52:03 -- scripts/common.sh@343 -- # case "$op" in 00:24:30.863 11:52:03 -- scripts/common.sh@344 -- # : 1 00:24:30.863 11:52:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:30.863 11:52:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.863 11:52:03 -- scripts/common.sh@364 -- # decimal 1 00:24:30.863 11:52:03 -- scripts/common.sh@352 -- # local d=1 00:24:30.863 11:52:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.863 11:52:03 -- scripts/common.sh@354 -- # echo 1 00:24:30.863 11:52:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:30.863 11:52:03 -- scripts/common.sh@365 -- # decimal 2 00:24:30.863 11:52:03 -- scripts/common.sh@352 -- # local d=2 00:24:30.863 11:52:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.863 11:52:03 -- scripts/common.sh@354 -- # echo 2 00:24:30.863 11:52:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:30.863 11:52:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:30.863 11:52:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:30.863 11:52:03 -- scripts/common.sh@367 -- # return 0 00:24:30.863 11:52:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.863 11:52:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.863 --rc genhtml_branch_coverage=1 00:24:30.863 --rc genhtml_function_coverage=1 00:24:30.863 --rc genhtml_legend=1 00:24:30.863 --rc geninfo_all_blocks=1 00:24:30.863 --rc geninfo_unexecuted_blocks=1 00:24:30.863 00:24:30.863 ' 00:24:30.863 11:52:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.863 --rc genhtml_branch_coverage=1 00:24:30.863 --rc genhtml_function_coverage=1 00:24:30.863 --rc genhtml_legend=1 00:24:30.863 --rc geninfo_all_blocks=1 00:24:30.863 --rc geninfo_unexecuted_blocks=1 00:24:30.863 00:24:30.863 ' 00:24:30.863 11:52:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.863 --rc genhtml_branch_coverage=1 00:24:30.863 --rc genhtml_function_coverage=1 00:24:30.863 --rc genhtml_legend=1 00:24:30.863 --rc geninfo_all_blocks=1 00:24:30.863 --rc geninfo_unexecuted_blocks=1 00:24:30.863 00:24:30.863 ' 00:24:30.863 11:52:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.863 --rc genhtml_branch_coverage=1 00:24:30.863 --rc genhtml_function_coverage=1 00:24:30.863 --rc genhtml_legend=1 00:24:30.863 --rc geninfo_all_blocks=1 00:24:30.863 --rc geninfo_unexecuted_blocks=1 00:24:30.863 00:24:30.863 ' 00:24:30.863 11:52:03 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:30.863 11:52:03 -- nvmf/common.sh@7 -- # uname -s 00:24:30.863 11:52:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.863 11:52:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.863 11:52:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.863 11:52:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.863 11:52:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.863 11:52:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.863 11:52:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.863 11:52:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.863 11:52:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.863 11:52:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.863 11:52:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:24:30.863 
11:52:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:24:30.863 11:52:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.863 11:52:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.863 11:52:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:30.863 11:52:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:30.863 11:52:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.863 11:52:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.863 11:52:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.863 11:52:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.863 11:52:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.863 11:52:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.863 11:52:03 -- paths/export.sh@5 -- # export PATH 00:24:30.863 11:52:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.863 11:52:03 -- nvmf/common.sh@46 -- # : 0 00:24:30.863 11:52:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:30.863 11:52:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:30.863 11:52:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:30.863 11:52:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.863 11:52:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.863 11:52:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:30.863 11:52:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:30.863 11:52:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:30.863 11:52:03 -- host/aer.sh@11 -- # nvmftestinit 00:24:30.863 11:52:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:30.863 11:52:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.863 11:52:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:30.863 11:52:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:30.863 11:52:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:30.863 11:52:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.863 11:52:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.863 11:52:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.864 11:52:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:30.864 11:52:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:30.864 11:52:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:30.864 11:52:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:30.864 11:52:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:30.864 11:52:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:30.864 11:52:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.864 11:52:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.864 11:52:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:30.864 11:52:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:30.864 11:52:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:30.864 11:52:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:30.864 11:52:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:30.864 11:52:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.864 11:52:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:30.864 11:52:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:30.864 11:52:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:30.864 11:52:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:30.864 11:52:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:30.864 11:52:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:30.864 Cannot find device "nvmf_tgt_br" 00:24:30.864 11:52:03 -- nvmf/common.sh@154 -- # true 00:24:30.864 11:52:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:30.864 Cannot find device "nvmf_tgt_br2" 00:24:30.864 11:52:03 -- nvmf/common.sh@155 -- # true 00:24:30.864 11:52:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:30.864 11:52:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:30.864 Cannot find device "nvmf_tgt_br" 00:24:30.864 11:52:03 -- nvmf/common.sh@157 -- # true 00:24:30.864 11:52:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:30.864 Cannot find device "nvmf_tgt_br2" 00:24:30.864 11:52:03 -- nvmf/common.sh@158 -- # true 00:24:30.864 11:52:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:30.864 11:52:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:30.864 11:52:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:30.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.864 11:52:03 -- nvmf/common.sh@161 -- # true 00:24:30.864 11:52:03 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:31.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.124 11:52:03 -- nvmf/common.sh@162 -- # true 00:24:31.124 11:52:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:31.124 11:52:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:31.124 11:52:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:31.124 11:52:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:31.124 11:52:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:31.124 11:52:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:31.124 11:52:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:31.124 11:52:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:31.124 11:52:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:31.124 11:52:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:31.124 11:52:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:31.124 11:52:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:31.124 11:52:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:31.124 11:52:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:31.124 11:52:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:31.124 11:52:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:31.124 11:52:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:31.124 11:52:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:31.124 11:52:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:31.124 11:52:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:31.124 11:52:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:31.124 11:52:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:31.124 11:52:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:31.124 11:52:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:31.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:24:31.124 00:24:31.124 --- 10.0.0.2 ping statistics --- 00:24:31.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.124 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:31.124 11:52:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:31.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:31.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:24:31.124 00:24:31.124 --- 10.0.0.3 ping statistics --- 00:24:31.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.124 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:31.124 11:52:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:31.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:24:31.124 00:24:31.124 --- 10.0.0.1 ping statistics --- 00:24:31.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.124 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:24:31.124 11:52:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.124 11:52:04 -- nvmf/common.sh@421 -- # return 0 00:24:31.124 11:52:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:31.124 11:52:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.124 11:52:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:31.124 11:52:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:31.124 11:52:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.124 11:52:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:31.125 11:52:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:31.125 11:52:04 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:31.125 11:52:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:31.125 11:52:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.125 11:52:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.125 11:52:04 -- nvmf/common.sh@469 -- # nvmfpid=82485 00:24:31.125 11:52:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.125 11:52:04 -- nvmf/common.sh@470 -- # waitforlisten 82485 00:24:31.125 11:52:04 -- common/autotest_common.sh@829 -- # '[' -z 82485 ']' 00:24:31.125 11:52:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.125 11:52:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.125 11:52:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.125 11:52:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.125 11:52:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.125 [2024-11-20 11:52:04.122017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:31.125 [2024-11-20 11:52:04.122093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.384 [2024-11-20 11:52:04.258186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.384 [2024-11-20 11:52:04.335967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:31.384 [2024-11-20 11:52:04.336100] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.384 [2024-11-20 11:52:04.336106] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.384 [2024-11-20 11:52:04.336111] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.384 [2024-11-20 11:52:04.336313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.384 [2024-11-20 11:52:04.336638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.385 [2024-11-20 11:52:04.336715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.385 [2024-11-20 11:52:04.336721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.954 11:52:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.954 11:52:04 -- common/autotest_common.sh@862 -- # return 0 00:24:31.954 11:52:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:31.954 11:52:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.954 11:52:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.954 11:52:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.954 11:52:04 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.954 11:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.954 11:52:04 -- common/autotest_common.sh@10 -- # set +x 00:24:32.213 [2024-11-20 11:52:05.008578] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.213 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.213 11:52:05 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:32.213 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.213 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.213 Malloc0 00:24:32.213 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.213 11:52:05 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:32.213 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.213 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.213 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.213 11:52:05 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.213 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.214 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.214 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.214 11:52:05 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.214 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.214 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.214 [2024-11-20 11:52:05.076569] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.214 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.214 11:52:05 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:32.214 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.214 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.214 [2024-11-20 11:52:05.088368] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:32.214 [ 00:24:32.214 { 00:24:32.214 "allow_any_host": true, 00:24:32.214 "hosts": [], 00:24:32.214 "listen_addresses": [], 00:24:32.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:32.214 "subtype": "Discovery" 00:24:32.214 }, 00:24:32.214 { 00:24:32.214 "allow_any_host": true, 00:24:32.214 "hosts": 
[], 00:24:32.214 "listen_addresses": [ 00:24:32.214 { 00:24:32.214 "adrfam": "IPv4", 00:24:32.214 "traddr": "10.0.0.2", 00:24:32.214 "transport": "TCP", 00:24:32.214 "trsvcid": "4420", 00:24:32.214 "trtype": "TCP" 00:24:32.214 } 00:24:32.214 ], 00:24:32.214 "max_cntlid": 65519, 00:24:32.214 "max_namespaces": 2, 00:24:32.214 "min_cntlid": 1, 00:24:32.214 "model_number": "SPDK bdev Controller", 00:24:32.214 "namespaces": [ 00:24:32.214 { 00:24:32.214 "bdev_name": "Malloc0", 00:24:32.214 "name": "Malloc0", 00:24:32.214 "nguid": "56E21CFB96144586A799F9A9CBB6C968", 00:24:32.214 "nsid": 1, 00:24:32.214 "uuid": "56e21cfb-9614-4586-a799-f9a9cbb6c968" 00:24:32.214 } 00:24:32.214 ], 00:24:32.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.214 "serial_number": "SPDK00000000000001", 00:24:32.214 "subtype": "NVMe" 00:24:32.214 } 00:24:32.214 ] 00:24:32.214 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.214 11:52:05 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:32.214 11:52:05 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:32.214 11:52:05 -- host/aer.sh@33 -- # aerpid=82539 00:24:32.214 11:52:05 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:32.214 11:52:05 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:32.214 11:52:05 -- common/autotest_common.sh@1254 -- # local i=0 00:24:32.214 11:52:05 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.214 11:52:05 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:24:32.214 11:52:05 -- common/autotest_common.sh@1257 -- # i=1 00:24:32.214 11:52:05 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:32.214 11:52:05 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.214 11:52:05 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:24:32.214 11:52:05 -- common/autotest_common.sh@1257 -- # i=2 00:24:32.214 11:52:05 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:32.474 11:52:05 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.474 11:52:05 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.474 11:52:05 -- common/autotest_common.sh@1265 -- # return 0 00:24:32.474 11:52:05 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:32.474 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.474 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.474 Malloc1 00:24:32.474 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.474 11:52:05 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:32.474 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.474 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.474 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.474 11:52:05 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:32.474 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.474 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.474 Asynchronous Event Request test 00:24:32.474 Attaching to 10.0.0.2 00:24:32.474 Attached to 10.0.0.2 00:24:32.474 Registering asynchronous event callbacks... 00:24:32.474 Starting namespace attribute notice tests for all controllers... 
00:24:32.474 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:32.474 aer_cb - Changed Namespace 00:24:32.474 Cleaning up... 00:24:32.474 [ 00:24:32.474 { 00:24:32.474 "allow_any_host": true, 00:24:32.474 "hosts": [], 00:24:32.474 "listen_addresses": [], 00:24:32.474 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:32.474 "subtype": "Discovery" 00:24:32.474 }, 00:24:32.474 { 00:24:32.474 "allow_any_host": true, 00:24:32.474 "hosts": [], 00:24:32.474 "listen_addresses": [ 00:24:32.474 { 00:24:32.474 "adrfam": "IPv4", 00:24:32.474 "traddr": "10.0.0.2", 00:24:32.474 "transport": "TCP", 00:24:32.474 "trsvcid": "4420", 00:24:32.474 "trtype": "TCP" 00:24:32.474 } 00:24:32.474 ], 00:24:32.474 "max_cntlid": 65519, 00:24:32.474 "max_namespaces": 2, 00:24:32.474 "min_cntlid": 1, 00:24:32.474 "model_number": "SPDK bdev Controller", 00:24:32.474 "namespaces": [ 00:24:32.474 { 00:24:32.474 "bdev_name": "Malloc0", 00:24:32.474 "name": "Malloc0", 00:24:32.474 "nguid": "56E21CFB96144586A799F9A9CBB6C968", 00:24:32.474 "nsid": 1, 00:24:32.474 "uuid": "56e21cfb-9614-4586-a799-f9a9cbb6c968" 00:24:32.474 }, 00:24:32.474 { 00:24:32.474 "bdev_name": "Malloc1", 00:24:32.474 "name": "Malloc1", 00:24:32.474 "nguid": "F04DCFD92D854050B43DB94553012427", 00:24:32.474 "nsid": 2, 00:24:32.474 "uuid": "f04dcfd9-2d85-4050-b43d-b94553012427" 00:24:32.474 } 00:24:32.474 ], 00:24:32.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.474 "serial_number": "SPDK00000000000001", 00:24:32.474 "subtype": "NVMe" 00:24:32.474 } 00:24:32.474 ] 00:24:32.474 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.474 11:52:05 -- host/aer.sh@43 -- # wait 82539 00:24:32.474 11:52:05 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:32.474 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.474 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.474 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.474 11:52:05 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:32.474 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.474 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.474 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.474 11:52:05 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.474 11:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.474 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.474 11:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.474 11:52:05 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:32.474 11:52:05 -- host/aer.sh@51 -- # nvmftestfini 00:24:32.474 11:52:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:32.474 11:52:05 -- nvmf/common.sh@116 -- # sync 00:24:32.735 11:52:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:32.735 11:52:05 -- nvmf/common.sh@119 -- # set +e 00:24:32.735 11:52:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:32.735 11:52:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:32.735 rmmod nvme_tcp 00:24:32.735 rmmod nvme_fabrics 00:24:32.735 rmmod nvme_keyring 00:24:32.735 11:52:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:32.735 11:52:05 -- nvmf/common.sh@123 -- # set -e 00:24:32.735 11:52:05 -- nvmf/common.sh@124 -- # return 0 00:24:32.735 11:52:05 -- nvmf/common.sh@477 -- # '[' -n 82485 ']' 00:24:32.735 11:52:05 -- nvmf/common.sh@478 -- # killprocess 82485 00:24:32.735 11:52:05 -- 
common/autotest_common.sh@936 -- # '[' -z 82485 ']' 00:24:32.735 11:52:05 -- common/autotest_common.sh@940 -- # kill -0 82485 00:24:32.735 11:52:05 -- common/autotest_common.sh@941 -- # uname 00:24:32.735 11:52:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:32.735 11:52:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82485 00:24:32.735 11:52:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:32.735 11:52:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:32.735 killing process with pid 82485 00:24:32.735 11:52:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82485' 00:24:32.735 11:52:05 -- common/autotest_common.sh@955 -- # kill 82485 00:24:32.735 [2024-11-20 11:52:05.621399] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:32.735 11:52:05 -- common/autotest_common.sh@960 -- # wait 82485 00:24:32.995 11:52:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:32.995 11:52:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:32.995 11:52:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:32.995 11:52:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.995 11:52:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:32.995 11:52:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.995 11:52:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.995 11:52:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.995 11:52:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:32.995 00:24:32.995 real 0m2.357s 00:24:32.995 user 0m6.129s 00:24:32.995 sys 0m0.683s 00:24:32.995 11:52:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:32.995 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.995 ************************************ 00:24:32.995 END TEST nvmf_aer 00:24:32.995 ************************************ 00:24:32.995 11:52:05 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:32.995 11:52:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:32.995 11:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:32.995 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.995 ************************************ 00:24:32.995 START TEST nvmf_async_init 00:24:32.995 ************************************ 00:24:32.995 11:52:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:33.256 * Looking for test storage... 
00:24:33.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:33.256 11:52:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:33.256 11:52:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:33.256 11:52:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:33.256 11:52:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:33.256 11:52:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:33.256 11:52:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:33.256 11:52:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:33.256 11:52:06 -- scripts/common.sh@335 -- # IFS=.-: 00:24:33.256 11:52:06 -- scripts/common.sh@335 -- # read -ra ver1 00:24:33.256 11:52:06 -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.256 11:52:06 -- scripts/common.sh@336 -- # read -ra ver2 00:24:33.256 11:52:06 -- scripts/common.sh@337 -- # local 'op=<' 00:24:33.256 11:52:06 -- scripts/common.sh@339 -- # ver1_l=2 00:24:33.256 11:52:06 -- scripts/common.sh@340 -- # ver2_l=1 00:24:33.256 11:52:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:33.256 11:52:06 -- scripts/common.sh@343 -- # case "$op" in 00:24:33.256 11:52:06 -- scripts/common.sh@344 -- # : 1 00:24:33.256 11:52:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:33.256 11:52:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.256 11:52:06 -- scripts/common.sh@364 -- # decimal 1 00:24:33.256 11:52:06 -- scripts/common.sh@352 -- # local d=1 00:24:33.256 11:52:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.256 11:52:06 -- scripts/common.sh@354 -- # echo 1 00:24:33.256 11:52:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:33.256 11:52:06 -- scripts/common.sh@365 -- # decimal 2 00:24:33.256 11:52:06 -- scripts/common.sh@352 -- # local d=2 00:24:33.256 11:52:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.256 11:52:06 -- scripts/common.sh@354 -- # echo 2 00:24:33.256 11:52:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:33.256 11:52:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:33.256 11:52:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:33.256 11:52:06 -- scripts/common.sh@367 -- # return 0 00:24:33.256 11:52:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.256 11:52:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:33.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.256 --rc genhtml_branch_coverage=1 00:24:33.256 --rc genhtml_function_coverage=1 00:24:33.256 --rc genhtml_legend=1 00:24:33.256 --rc geninfo_all_blocks=1 00:24:33.256 --rc geninfo_unexecuted_blocks=1 00:24:33.256 00:24:33.256 ' 00:24:33.256 11:52:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:33.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.256 --rc genhtml_branch_coverage=1 00:24:33.256 --rc genhtml_function_coverage=1 00:24:33.256 --rc genhtml_legend=1 00:24:33.256 --rc geninfo_all_blocks=1 00:24:33.256 --rc geninfo_unexecuted_blocks=1 00:24:33.256 00:24:33.256 ' 00:24:33.256 11:52:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:33.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.256 --rc genhtml_branch_coverage=1 00:24:33.256 --rc genhtml_function_coverage=1 00:24:33.256 --rc genhtml_legend=1 00:24:33.256 --rc geninfo_all_blocks=1 00:24:33.256 --rc geninfo_unexecuted_blocks=1 00:24:33.256 00:24:33.256 ' 00:24:33.256 
11:52:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:33.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.256 --rc genhtml_branch_coverage=1 00:24:33.256 --rc genhtml_function_coverage=1 00:24:33.256 --rc genhtml_legend=1 00:24:33.256 --rc geninfo_all_blocks=1 00:24:33.256 --rc geninfo_unexecuted_blocks=1 00:24:33.256 00:24:33.256 ' 00:24:33.256 11:52:06 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:33.256 11:52:06 -- nvmf/common.sh@7 -- # uname -s 00:24:33.256 11:52:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.256 11:52:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.256 11:52:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.256 11:52:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.256 11:52:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.256 11:52:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.256 11:52:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.256 11:52:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.256 11:52:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.256 11:52:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.256 11:52:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:24:33.256 11:52:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:24:33.256 11:52:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.256 11:52:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.256 11:52:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:33.256 11:52:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:33.256 11:52:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.256 11:52:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.256 11:52:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.256 11:52:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.256 11:52:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.257 11:52:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.257 11:52:06 -- paths/export.sh@5 -- # export PATH 00:24:33.257 11:52:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.257 11:52:06 -- nvmf/common.sh@46 -- # : 0 00:24:33.257 11:52:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:33.257 11:52:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:33.257 11:52:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:33.257 11:52:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.257 11:52:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.257 11:52:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:33.257 11:52:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:33.257 11:52:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:33.257 11:52:06 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:33.257 11:52:06 -- host/async_init.sh@14 -- # null_block_size=512 00:24:33.257 11:52:06 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:33.257 11:52:06 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:33.257 11:52:06 -- host/async_init.sh@20 -- # uuidgen 00:24:33.257 11:52:06 -- host/async_init.sh@20 -- # tr -d - 00:24:33.257 11:52:06 -- host/async_init.sh@20 -- # nguid=b90c982db7c4473f87553493b93db66b 00:24:33.257 11:52:06 -- host/async_init.sh@22 -- # nvmftestinit 00:24:33.257 11:52:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:33.257 11:52:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.257 11:52:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:33.257 11:52:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:33.257 11:52:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:33.257 11:52:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.257 11:52:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.257 11:52:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.257 11:52:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:33.257 11:52:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:33.257 11:52:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:33.257 11:52:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:33.257 11:52:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:33.257 11:52:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:33.257 11:52:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.257 11:52:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.257 11:52:06 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:33.257 11:52:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:33.257 11:52:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:33.257 11:52:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:33.257 11:52:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:33.257 11:52:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.257 11:52:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:33.257 11:52:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:33.257 11:52:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:33.257 11:52:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:33.257 11:52:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:33.257 11:52:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:33.257 Cannot find device "nvmf_tgt_br" 00:24:33.257 11:52:06 -- nvmf/common.sh@154 -- # true 00:24:33.257 11:52:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.257 Cannot find device "nvmf_tgt_br2" 00:24:33.257 11:52:06 -- nvmf/common.sh@155 -- # true 00:24:33.257 11:52:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:33.257 11:52:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:33.517 Cannot find device "nvmf_tgt_br" 00:24:33.517 11:52:06 -- nvmf/common.sh@157 -- # true 00:24:33.517 11:52:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:33.517 Cannot find device "nvmf_tgt_br2" 00:24:33.517 11:52:06 -- nvmf/common.sh@158 -- # true 00:24:33.517 11:52:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:33.517 11:52:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:33.517 11:52:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:33.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:33.517 11:52:06 -- nvmf/common.sh@161 -- # true 00:24:33.517 11:52:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:33.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:33.517 11:52:06 -- nvmf/common.sh@162 -- # true 00:24:33.517 11:52:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:33.517 11:52:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:33.517 11:52:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:33.517 11:52:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:33.517 11:52:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:33.517 11:52:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:33.517 11:52:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:33.517 11:52:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:33.517 11:52:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:33.517 11:52:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:33.517 11:52:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:33.517 11:52:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:33.517 11:52:06 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:33.517 11:52:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:33.517 11:52:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:33.517 11:52:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:33.517 11:52:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:33.517 11:52:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:33.517 11:52:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:33.517 11:52:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:33.517 11:52:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:33.778 11:52:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:33.778 11:52:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:33.778 11:52:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:33.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:24:33.778 00:24:33.778 --- 10.0.0.2 ping statistics --- 00:24:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.778 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:33.778 11:52:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:33.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:33.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:24:33.778 00:24:33.778 --- 10.0.0.3 ping statistics --- 00:24:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.778 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:24:33.778 11:52:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:33.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:24:33.778 00:24:33.778 --- 10.0.0.1 ping statistics --- 00:24:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.778 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:24:33.778 11:52:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.778 11:52:06 -- nvmf/common.sh@421 -- # return 0 00:24:33.778 11:52:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:33.778 11:52:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.778 11:52:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:33.778 11:52:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:33.778 11:52:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.778 11:52:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:33.778 11:52:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:33.778 11:52:06 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:33.778 11:52:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:33.778 11:52:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.778 11:52:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.778 11:52:06 -- nvmf/common.sh@469 -- # nvmfpid=82721 00:24:33.778 11:52:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:33.778 11:52:06 -- nvmf/common.sh@470 -- # waitforlisten 82721 00:24:33.778 11:52:06 -- common/autotest_common.sh@829 -- # '[' -z 82721 ']' 00:24:33.778 11:52:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.778 11:52:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.778 11:52:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.778 11:52:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.778 11:52:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.778 [2024-11-20 11:52:06.684889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:33.778 [2024-11-20 11:52:06.684955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.038 [2024-11-20 11:52:06.819938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.038 [2024-11-20 11:52:06.902214] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:34.038 [2024-11-20 11:52:06.902345] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.038 [2024-11-20 11:52:06.902352] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.038 [2024-11-20 11:52:06.902356] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
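Before the target application comes up, nvmf_veth_init (traced above) builds the whole virtual test topology: a dedicated network namespace for the target, three veth pairs, a bridge joining their host-side ends, iptables rules for the NVMe/TCP port, and reachability pings in both directions. The earlier "Cannot find device" / "Cannot open network namespace" messages are only the best-effort teardown of leftovers from a previous run. A condensed sketch of the same commands, with every interface name and address taken from the trace (run as root):

    # veth/bridge topology used by nvmftestinit (condensed from the trace above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # namespace -> host

With 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, the three pings verify the path through nvmf_br before nvme-tcp is modprobed and the target is launched.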
00:24:34.038 [2024-11-20 11:52:06.902382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.608 11:52:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.608 11:52:07 -- common/autotest_common.sh@862 -- # return 0 00:24:34.608 11:52:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:34.608 11:52:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.608 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 11:52:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.608 11:52:07 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:34.608 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.608 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 [2024-11-20 11:52:07.596212] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.608 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.608 11:52:07 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:34.608 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.608 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 null0 00:24:34.608 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.608 11:52:07 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:34.608 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.608 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.608 11:52:07 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:34.608 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.608 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.608 11:52:07 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b90c982db7c4473f87553493b93db66b 00:24:34.608 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.609 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.868 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.868 11:52:07 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:34.868 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.869 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.869 [2024-11-20 11:52:07.657153] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.869 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.869 11:52:07 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:34.869 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.869 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.869 nvme0n1 00:24:34.869 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.869 11:52:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:34.869 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.869 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:35.128 [ 00:24:35.129 { 00:24:35.129 "aliases": [ 00:24:35.129 "b90c982d-b7c4-473f-8755-3493b93db66b" 
00:24:35.129 ], 00:24:35.129 "assigned_rate_limits": { 00:24:35.129 "r_mbytes_per_sec": 0, 00:24:35.129 "rw_ios_per_sec": 0, 00:24:35.129 "rw_mbytes_per_sec": 0, 00:24:35.129 "w_mbytes_per_sec": 0 00:24:35.129 }, 00:24:35.129 "block_size": 512, 00:24:35.129 "claimed": false, 00:24:35.129 "driver_specific": { 00:24:35.129 "mp_policy": "active_passive", 00:24:35.129 "nvme": [ 00:24:35.129 { 00:24:35.129 "ctrlr_data": { 00:24:35.129 "ana_reporting": false, 00:24:35.129 "cntlid": 1, 00:24:35.129 "firmware_revision": "24.01.1", 00:24:35.129 "model_number": "SPDK bdev Controller", 00:24:35.129 "multi_ctrlr": true, 00:24:35.129 "oacs": { 00:24:35.129 "firmware": 0, 00:24:35.129 "format": 0, 00:24:35.129 "ns_manage": 0, 00:24:35.129 "security": 0 00:24:35.129 }, 00:24:35.129 "serial_number": "00000000000000000000", 00:24:35.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.129 "vendor_id": "0x8086" 00:24:35.129 }, 00:24:35.129 "ns_data": { 00:24:35.129 "can_share": true, 00:24:35.129 "id": 1 00:24:35.129 }, 00:24:35.129 "trid": { 00:24:35.129 "adrfam": "IPv4", 00:24:35.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.129 "traddr": "10.0.0.2", 00:24:35.129 "trsvcid": "4420", 00:24:35.129 "trtype": "TCP" 00:24:35.129 }, 00:24:35.129 "vs": { 00:24:35.129 "nvme_version": "1.3" 00:24:35.129 } 00:24:35.129 } 00:24:35.129 ] 00:24:35.129 }, 00:24:35.129 "name": "nvme0n1", 00:24:35.129 "num_blocks": 2097152, 00:24:35.129 "product_name": "NVMe disk", 00:24:35.129 "supported_io_types": { 00:24:35.129 "abort": true, 00:24:35.129 "compare": true, 00:24:35.129 "compare_and_write": true, 00:24:35.129 "flush": true, 00:24:35.129 "nvme_admin": true, 00:24:35.129 "nvme_io": true, 00:24:35.129 "read": true, 00:24:35.129 "reset": true, 00:24:35.129 "unmap": false, 00:24:35.129 "write": true, 00:24:35.129 "write_zeroes": true 00:24:35.129 }, 00:24:35.129 "uuid": "b90c982d-b7c4-473f-8755-3493b93db66b", 00:24:35.129 "zoned": false 00:24:35.129 } 00:24:35.129 ] 00:24:35.129 11:52:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.129 11:52:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:35.129 11:52:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.129 11:52:07 -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 [2024-11-20 11:52:07.932687] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:35.129 [2024-11-20 11:52:07.932746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5def90 (9): Bad file descriptor 00:24:35.129 [2024-11-20 11:52:08.064749] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
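Everything in the async_init flow above is driven through JSON-RPC; rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py, talking to the target that was just started in the namespace. A condensed sketch of the same sequence, written with rpc.py directly (an equivalent spelling, not the literal trace):

    # target role: transport, a 1024 MiB null bdev with 512 B blocks (hence num_blocks 2097152 above),
    # a subsystem with an explicit NGUID, and a TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b90c982db7c4473f87553493b93db66b
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator role (the same application loops back over TCP to its own listener):
    # attach, inspect the resulting bdev, then reset the controller
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1
    scripts/rpc.py bdev_nvme_reset_controller nvme0

Each bdev_get_bdevs dump shows the NGUID round-tripping as the bdev UUID (b90c982d-b7c4-473f-8755-3493b93db66b) while cntlid advances with every new controller association.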
00:24:35.129 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.129 11:52:08 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:35.129 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.129 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 [ 00:24:35.129 { 00:24:35.129 "aliases": [ 00:24:35.129 "b90c982d-b7c4-473f-8755-3493b93db66b" 00:24:35.129 ], 00:24:35.129 "assigned_rate_limits": { 00:24:35.129 "r_mbytes_per_sec": 0, 00:24:35.129 "rw_ios_per_sec": 0, 00:24:35.129 "rw_mbytes_per_sec": 0, 00:24:35.129 "w_mbytes_per_sec": 0 00:24:35.129 }, 00:24:35.129 "block_size": 512, 00:24:35.129 "claimed": false, 00:24:35.129 "driver_specific": { 00:24:35.129 "mp_policy": "active_passive", 00:24:35.129 "nvme": [ 00:24:35.129 { 00:24:35.129 "ctrlr_data": { 00:24:35.129 "ana_reporting": false, 00:24:35.129 "cntlid": 2, 00:24:35.129 "firmware_revision": "24.01.1", 00:24:35.129 "model_number": "SPDK bdev Controller", 00:24:35.129 "multi_ctrlr": true, 00:24:35.129 "oacs": { 00:24:35.129 "firmware": 0, 00:24:35.129 "format": 0, 00:24:35.129 "ns_manage": 0, 00:24:35.129 "security": 0 00:24:35.129 }, 00:24:35.129 "serial_number": "00000000000000000000", 00:24:35.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.129 "vendor_id": "0x8086" 00:24:35.129 }, 00:24:35.129 "ns_data": { 00:24:35.129 "can_share": true, 00:24:35.129 "id": 1 00:24:35.129 }, 00:24:35.129 "trid": { 00:24:35.129 "adrfam": "IPv4", 00:24:35.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.129 "traddr": "10.0.0.2", 00:24:35.129 "trsvcid": "4420", 00:24:35.129 "trtype": "TCP" 00:24:35.129 }, 00:24:35.129 "vs": { 00:24:35.129 "nvme_version": "1.3" 00:24:35.129 } 00:24:35.129 } 00:24:35.129 ] 00:24:35.129 }, 00:24:35.129 "name": "nvme0n1", 00:24:35.129 "num_blocks": 2097152, 00:24:35.129 "product_name": "NVMe disk", 00:24:35.129 "supported_io_types": { 00:24:35.129 "abort": true, 00:24:35.129 "compare": true, 00:24:35.129 "compare_and_write": true, 00:24:35.129 "flush": true, 00:24:35.129 "nvme_admin": true, 00:24:35.129 "nvme_io": true, 00:24:35.129 "read": true, 00:24:35.129 "reset": true, 00:24:35.129 "unmap": false, 00:24:35.129 "write": true, 00:24:35.129 "write_zeroes": true 00:24:35.129 }, 00:24:35.129 "uuid": "b90c982d-b7c4-473f-8755-3493b93db66b", 00:24:35.129 "zoned": false 00:24:35.129 } 00:24:35.129 ] 00:24:35.129 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.129 11:52:08 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.129 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.129 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.129 11:52:08 -- host/async_init.sh@53 -- # mktemp 00:24:35.129 11:52:08 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HsXhjEcPGo 00:24:35.129 11:52:08 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:35.129 11:52:08 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HsXhjEcPGo 00:24:35.129 11:52:08 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:35.129 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.129 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.129 11:52:08 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:35.129 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.129 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 [2024-11-20 11:52:08.148391] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.130 [2024-11-20 11:52:08.148482] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.130 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.130 11:52:08 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HsXhjEcPGo 00:24:35.130 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.130 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.130 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.130 11:52:08 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HsXhjEcPGo 00:24:35.130 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.130 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.390 [2024-11-20 11:52:08.172347] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:35.390 nvme0n1 00:24:35.390 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.390 11:52:08 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:35.390 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.390 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.390 [ 00:24:35.390 { 00:24:35.390 "aliases": [ 00:24:35.390 "b90c982d-b7c4-473f-8755-3493b93db66b" 00:24:35.390 ], 00:24:35.390 "assigned_rate_limits": { 00:24:35.390 "r_mbytes_per_sec": 0, 00:24:35.390 "rw_ios_per_sec": 0, 00:24:35.390 "rw_mbytes_per_sec": 0, 00:24:35.390 "w_mbytes_per_sec": 0 00:24:35.390 }, 00:24:35.390 "block_size": 512, 00:24:35.390 "claimed": false, 00:24:35.390 "driver_specific": { 00:24:35.390 "mp_policy": "active_passive", 00:24:35.390 "nvme": [ 00:24:35.390 { 00:24:35.390 "ctrlr_data": { 00:24:35.390 "ana_reporting": false, 00:24:35.390 "cntlid": 3, 00:24:35.390 "firmware_revision": "24.01.1", 00:24:35.390 "model_number": "SPDK bdev Controller", 00:24:35.390 "multi_ctrlr": true, 00:24:35.390 "oacs": { 00:24:35.390 "firmware": 0, 00:24:35.390 "format": 0, 00:24:35.390 "ns_manage": 0, 00:24:35.390 "security": 0 00:24:35.390 }, 00:24:35.390 "serial_number": "00000000000000000000", 00:24:35.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.390 "vendor_id": "0x8086" 00:24:35.390 }, 00:24:35.390 "ns_data": { 00:24:35.390 "can_share": true, 00:24:35.390 "id": 1 00:24:35.390 }, 00:24:35.390 "trid": { 00:24:35.390 "adrfam": "IPv4", 00:24:35.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.390 "traddr": "10.0.0.2", 00:24:35.390 "trsvcid": "4421", 00:24:35.390 "trtype": "TCP" 00:24:35.390 }, 00:24:35.390 "vs": { 00:24:35.390 "nvme_version": "1.3" 00:24:35.390 } 00:24:35.390 } 00:24:35.390 ] 00:24:35.390 }, 00:24:35.390 "name": "nvme0n1", 00:24:35.390 "num_blocks": 2097152, 00:24:35.390 "product_name": "NVMe disk", 00:24:35.390 "supported_io_types": { 00:24:35.390 "abort": true, 00:24:35.390 "compare": true, 00:24:35.390 "compare_and_write": true, 00:24:35.390 "flush": true, 00:24:35.390 "nvme_admin": true, 00:24:35.390 "nvme_io": true, 00:24:35.390 
"read": true, 00:24:35.390 "reset": true, 00:24:35.390 "unmap": false, 00:24:35.390 "write": true, 00:24:35.390 "write_zeroes": true 00:24:35.390 }, 00:24:35.391 "uuid": "b90c982d-b7c4-473f-8755-3493b93db66b", 00:24:35.391 "zoned": false 00:24:35.391 } 00:24:35.391 ] 00:24:35.391 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.391 11:52:08 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.391 11:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.391 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.391 11:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.391 11:52:08 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.HsXhjEcPGo 00:24:35.391 11:52:08 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:35.391 11:52:08 -- host/async_init.sh@78 -- # nvmftestfini 00:24:35.391 11:52:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:35.391 11:52:08 -- nvmf/common.sh@116 -- # sync 00:24:35.391 11:52:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:35.391 11:52:08 -- nvmf/common.sh@119 -- # set +e 00:24:35.391 11:52:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:35.391 11:52:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:35.391 rmmod nvme_tcp 00:24:35.391 rmmod nvme_fabrics 00:24:35.391 rmmod nvme_keyring 00:24:35.391 11:52:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:35.391 11:52:08 -- nvmf/common.sh@123 -- # set -e 00:24:35.391 11:52:08 -- nvmf/common.sh@124 -- # return 0 00:24:35.391 11:52:08 -- nvmf/common.sh@477 -- # '[' -n 82721 ']' 00:24:35.391 11:52:08 -- nvmf/common.sh@478 -- # killprocess 82721 00:24:35.391 11:52:08 -- common/autotest_common.sh@936 -- # '[' -z 82721 ']' 00:24:35.391 11:52:08 -- common/autotest_common.sh@940 -- # kill -0 82721 00:24:35.391 11:52:08 -- common/autotest_common.sh@941 -- # uname 00:24:35.391 11:52:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:35.391 11:52:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82721 00:24:35.391 11:52:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:35.391 11:52:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:35.391 11:52:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82721' 00:24:35.391 killing process with pid 82721 00:24:35.391 11:52:08 -- common/autotest_common.sh@955 -- # kill 82721 00:24:35.391 11:52:08 -- common/autotest_common.sh@960 -- # wait 82721 00:24:35.651 11:52:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:35.651 11:52:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:35.651 11:52:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:35.651 11:52:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.651 11:52:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:35.651 11:52:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.651 11:52:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.651 11:52:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.651 11:52:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:35.651 00:24:35.651 real 0m2.723s 00:24:35.651 user 0m2.361s 00:24:35.651 sys 0m0.742s 00:24:35.651 11:52:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:35.651 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.651 ************************************ 00:24:35.651 END TEST nvmf_async_init 00:24:35.651 
************************************ 00:24:35.912 11:52:08 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:35.912 11:52:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:35.912 11:52:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:35.912 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.912 ************************************ 00:24:35.912 START TEST dma 00:24:35.912 ************************************ 00:24:35.912 11:52:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:35.912 * Looking for test storage... 00:24:35.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:35.912 11:52:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:35.912 11:52:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:35.912 11:52:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:35.912 11:52:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:35.912 11:52:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:35.912 11:52:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:35.912 11:52:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:35.912 11:52:08 -- scripts/common.sh@335 -- # IFS=.-: 00:24:35.912 11:52:08 -- scripts/common.sh@335 -- # read -ra ver1 00:24:35.912 11:52:08 -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.912 11:52:08 -- scripts/common.sh@336 -- # read -ra ver2 00:24:35.912 11:52:08 -- scripts/common.sh@337 -- # local 'op=<' 00:24:35.912 11:52:08 -- scripts/common.sh@339 -- # ver1_l=2 00:24:35.912 11:52:08 -- scripts/common.sh@340 -- # ver2_l=1 00:24:35.912 11:52:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:35.912 11:52:08 -- scripts/common.sh@343 -- # case "$op" in 00:24:35.912 11:52:08 -- scripts/common.sh@344 -- # : 1 00:24:35.912 11:52:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:35.912 11:52:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.912 11:52:08 -- scripts/common.sh@364 -- # decimal 1 00:24:35.912 11:52:08 -- scripts/common.sh@352 -- # local d=1 00:24:35.912 11:52:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.912 11:52:08 -- scripts/common.sh@354 -- # echo 1 00:24:35.912 11:52:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:35.912 11:52:08 -- scripts/common.sh@365 -- # decimal 2 00:24:35.912 11:52:08 -- scripts/common.sh@352 -- # local d=2 00:24:35.912 11:52:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.912 11:52:08 -- scripts/common.sh@354 -- # echo 2 00:24:35.912 11:52:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:35.912 11:52:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:35.912 11:52:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:35.912 11:52:08 -- scripts/common.sh@367 -- # return 0 00:24:35.912 11:52:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.912 11:52:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:35.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.912 --rc genhtml_branch_coverage=1 00:24:35.912 --rc genhtml_function_coverage=1 00:24:35.912 --rc genhtml_legend=1 00:24:35.912 --rc geninfo_all_blocks=1 00:24:35.912 --rc geninfo_unexecuted_blocks=1 00:24:35.912 00:24:35.912 ' 00:24:35.912 11:52:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:35.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.912 --rc genhtml_branch_coverage=1 00:24:35.912 --rc genhtml_function_coverage=1 00:24:35.912 --rc genhtml_legend=1 00:24:35.912 --rc geninfo_all_blocks=1 00:24:35.912 --rc geninfo_unexecuted_blocks=1 00:24:35.912 00:24:35.912 ' 00:24:35.912 11:52:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:35.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.912 --rc genhtml_branch_coverage=1 00:24:35.912 --rc genhtml_function_coverage=1 00:24:35.912 --rc genhtml_legend=1 00:24:35.912 --rc geninfo_all_blocks=1 00:24:35.912 --rc geninfo_unexecuted_blocks=1 00:24:35.912 00:24:35.912 ' 00:24:35.912 11:52:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:35.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.912 --rc genhtml_branch_coverage=1 00:24:35.912 --rc genhtml_function_coverage=1 00:24:35.912 --rc genhtml_legend=1 00:24:35.912 --rc geninfo_all_blocks=1 00:24:35.912 --rc geninfo_unexecuted_blocks=1 00:24:35.912 00:24:35.912 ' 00:24:35.912 11:52:08 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:35.912 11:52:08 -- nvmf/common.sh@7 -- # uname -s 00:24:36.173 11:52:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.173 11:52:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.173 11:52:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.173 11:52:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.173 11:52:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.173 11:52:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.173 11:52:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.173 11:52:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.173 11:52:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.173 11:52:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.173 11:52:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:24:36.173 
11:52:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:24:36.173 11:52:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.173 11:52:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.173 11:52:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:36.173 11:52:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:36.173 11:52:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.173 11:52:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.173 11:52:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.173 11:52:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.173 11:52:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.173 11:52:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.173 11:52:08 -- paths/export.sh@5 -- # export PATH 00:24:36.173 11:52:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.173 11:52:08 -- nvmf/common.sh@46 -- # : 0 00:24:36.173 11:52:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:36.173 11:52:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:36.173 11:52:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:36.173 11:52:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.173 11:52:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.173 11:52:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
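Each host-side suite begins by sourcing nvmf/common.sh, which (as the trace shows) generates a fresh host identity with nvme-cli and stashes it in NVME_HOST for later connect/attach calls. A minimal sketch of that step; the NVME_HOSTID derivation written here is an illustrative assumption, only the generated values themselves appear in the trace:

    # per-run host identity (regenerated every time common.sh is sourced)
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: the bare UUID is the text after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")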
00:24:36.173 11:52:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:36.173 11:52:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:36.174 11:52:08 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:36.174 11:52:08 -- host/dma.sh@13 -- # exit 0 00:24:36.174 00:24:36.174 real 0m0.238s 00:24:36.174 user 0m0.138s 00:24:36.174 sys 0m0.117s 00:24:36.174 11:52:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:36.174 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:24:36.174 ************************************ 00:24:36.174 END TEST dma 00:24:36.174 ************************************ 00:24:36.174 11:52:09 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:36.174 11:52:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:36.174 11:52:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:36.174 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.174 ************************************ 00:24:36.174 START TEST nvmf_identify 00:24:36.174 ************************************ 00:24:36.174 11:52:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:36.174 * Looking for test storage... 00:24:36.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:36.174 11:52:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:36.174 11:52:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:36.174 11:52:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:36.434 11:52:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:36.434 11:52:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:36.434 11:52:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:36.434 11:52:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:36.434 11:52:09 -- scripts/common.sh@335 -- # IFS=.-: 00:24:36.434 11:52:09 -- scripts/common.sh@335 -- # read -ra ver1 00:24:36.434 11:52:09 -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.434 11:52:09 -- scripts/common.sh@336 -- # read -ra ver2 00:24:36.434 11:52:09 -- scripts/common.sh@337 -- # local 'op=<' 00:24:36.434 11:52:09 -- scripts/common.sh@339 -- # ver1_l=2 00:24:36.434 11:52:09 -- scripts/common.sh@340 -- # ver2_l=1 00:24:36.434 11:52:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:36.434 11:52:09 -- scripts/common.sh@343 -- # case "$op" in 00:24:36.434 11:52:09 -- scripts/common.sh@344 -- # : 1 00:24:36.434 11:52:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:36.434 11:52:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.434 11:52:09 -- scripts/common.sh@364 -- # decimal 1 00:24:36.434 11:52:09 -- scripts/common.sh@352 -- # local d=1 00:24:36.434 11:52:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.434 11:52:09 -- scripts/common.sh@354 -- # echo 1 00:24:36.434 11:52:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:36.434 11:52:09 -- scripts/common.sh@365 -- # decimal 2 00:24:36.434 11:52:09 -- scripts/common.sh@352 -- # local d=2 00:24:36.434 11:52:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.434 11:52:09 -- scripts/common.sh@354 -- # echo 2 00:24:36.434 11:52:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:36.434 11:52:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:36.434 11:52:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:36.434 11:52:09 -- scripts/common.sh@367 -- # return 0 00:24:36.434 11:52:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.434 11:52:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.434 --rc genhtml_branch_coverage=1 00:24:36.434 --rc genhtml_function_coverage=1 00:24:36.434 --rc genhtml_legend=1 00:24:36.434 --rc geninfo_all_blocks=1 00:24:36.434 --rc geninfo_unexecuted_blocks=1 00:24:36.434 00:24:36.434 ' 00:24:36.434 11:52:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.434 --rc genhtml_branch_coverage=1 00:24:36.434 --rc genhtml_function_coverage=1 00:24:36.434 --rc genhtml_legend=1 00:24:36.434 --rc geninfo_all_blocks=1 00:24:36.434 --rc geninfo_unexecuted_blocks=1 00:24:36.434 00:24:36.434 ' 00:24:36.434 11:52:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.434 --rc genhtml_branch_coverage=1 00:24:36.434 --rc genhtml_function_coverage=1 00:24:36.434 --rc genhtml_legend=1 00:24:36.434 --rc geninfo_all_blocks=1 00:24:36.434 --rc geninfo_unexecuted_blocks=1 00:24:36.434 00:24:36.434 ' 00:24:36.434 11:52:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.434 --rc genhtml_branch_coverage=1 00:24:36.434 --rc genhtml_function_coverage=1 00:24:36.434 --rc genhtml_legend=1 00:24:36.434 --rc geninfo_all_blocks=1 00:24:36.434 --rc geninfo_unexecuted_blocks=1 00:24:36.434 00:24:36.434 ' 00:24:36.434 11:52:09 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:36.434 11:52:09 -- nvmf/common.sh@7 -- # uname -s 00:24:36.434 11:52:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.434 11:52:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.434 11:52:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.434 11:52:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.434 11:52:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.434 11:52:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.434 11:52:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.434 11:52:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.434 11:52:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.434 11:52:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.434 11:52:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:24:36.434 
11:52:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:24:36.434 11:52:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.434 11:52:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.434 11:52:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:36.434 11:52:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:36.434 11:52:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.434 11:52:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.434 11:52:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.434 11:52:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.434 11:52:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.434 11:52:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.434 11:52:09 -- paths/export.sh@5 -- # export PATH 00:24:36.434 11:52:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.434 11:52:09 -- nvmf/common.sh@46 -- # : 0 00:24:36.434 11:52:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:36.434 11:52:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:36.434 11:52:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:36.434 11:52:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.434 11:52:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.434 11:52:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:36.434 11:52:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:36.434 11:52:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:36.434 11:52:09 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.434 11:52:09 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:36.434 11:52:09 -- host/identify.sh@14 -- # nvmftestinit 00:24:36.434 11:52:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:36.434 11:52:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.434 11:52:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:36.434 11:52:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:36.434 11:52:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:36.434 11:52:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.434 11:52:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.434 11:52:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.434 11:52:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:36.434 11:52:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:36.434 11:52:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:36.434 11:52:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:36.434 11:52:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:36.434 11:52:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:36.434 11:52:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.434 11:52:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.435 11:52:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:36.435 11:52:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:36.435 11:52:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:36.435 11:52:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:36.435 11:52:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:36.435 11:52:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.435 11:52:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:36.435 11:52:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:36.435 11:52:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:36.435 11:52:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:36.435 11:52:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:36.435 11:52:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:36.435 Cannot find device "nvmf_tgt_br" 00:24:36.435 11:52:09 -- nvmf/common.sh@154 -- # true 00:24:36.435 11:52:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:36.435 Cannot find device "nvmf_tgt_br2" 00:24:36.435 11:52:09 -- nvmf/common.sh@155 -- # true 00:24:36.435 11:52:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:36.435 11:52:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:36.435 Cannot find device "nvmf_tgt_br" 00:24:36.435 11:52:09 -- nvmf/common.sh@157 -- # true 00:24:36.435 11:52:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:36.435 Cannot find device "nvmf_tgt_br2" 00:24:36.435 11:52:09 -- nvmf/common.sh@158 -- # true 00:24:36.435 11:52:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:36.435 11:52:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:36.435 11:52:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.435 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:24:36.435 11:52:09 -- nvmf/common.sh@161 -- # true 00:24:36.435 11:52:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.435 11:52:09 -- nvmf/common.sh@162 -- # true 00:24:36.435 11:52:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:36.435 11:52:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:36.694 11:52:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:36.694 11:52:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:36.694 11:52:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:36.694 11:52:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:36.694 11:52:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:36.694 11:52:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:36.694 11:52:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:36.694 11:52:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:36.694 11:52:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:36.694 11:52:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:36.694 11:52:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:36.694 11:52:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:36.694 11:52:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:36.694 11:52:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:36.694 11:52:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:36.694 11:52:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:36.694 11:52:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:36.694 11:52:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:36.694 11:52:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:36.694 11:52:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:36.694 11:52:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:36.694 11:52:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:36.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:24:36.694 00:24:36.694 --- 10.0.0.2 ping statistics --- 00:24:36.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.695 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:36.695 11:52:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:36.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:36.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:24:36.695 00:24:36.695 --- 10.0.0.3 ping statistics --- 00:24:36.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.695 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:36.695 11:52:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:36.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:24:36.695 00:24:36.695 --- 10.0.0.1 ping statistics --- 00:24:36.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.695 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:36.695 11:52:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.695 11:52:09 -- nvmf/common.sh@421 -- # return 0 00:24:36.695 11:52:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:36.695 11:52:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.695 11:52:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:36.695 11:52:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:36.695 11:52:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.695 11:52:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:36.695 11:52:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:36.695 11:52:09 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:36.695 11:52:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.695 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.695 11:52:09 -- host/identify.sh@19 -- # nvmfpid=83002 00:24:36.695 11:52:09 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:36.695 11:52:09 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.695 11:52:09 -- host/identify.sh@23 -- # waitforlisten 83002 00:24:36.695 11:52:09 -- common/autotest_common.sh@829 -- # '[' -z 83002 ']' 00:24:36.695 11:52:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.695 11:52:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.695 11:52:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.695 11:52:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.695 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.695 [2024-11-20 11:52:09.729486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:36.695 [2024-11-20 11:52:09.729545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.954 [2024-11-20 11:52:09.866432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.954 [2024-11-20 11:52:09.946117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:36.955 [2024-11-20 11:52:09.946255] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.955 [2024-11-20 11:52:09.946262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.955 [2024-11-20 11:52:09.946267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
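nvmfappstart (traced above) launches nvmf_tgt inside the test namespace with shared-memory id 0, every tracepoint group enabled, and a four-core mask, then waits in waitforlisten until the application answers on its RPC socket. A rough sketch of that hand-off; the polling loop below is an illustrative stand-in for the real waitforlisten helper, not its literal implementation:

    # start the target inside the namespace: -i 0 (shm id), -e 0xFFFF (tracepoints), -m 0xF (cores 0-3)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # block until the UNIX-domain RPC socket is up before any rpc_cmd is issued
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
        sleep 0.1
    done

The trace-flag error (RDMA_REQ_RDY_TO_COMPL_PEND too long) and the tracepoint notices that follow are emitted during this startup; the run proceeds once all four reactors report in.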
00:24:36.955 [2024-11-20 11:52:09.946474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.955 [2024-11-20 11:52:09.946876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.955 [2024-11-20 11:52:09.946822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.955 [2024-11-20 11:52:09.946883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.523 11:52:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.523 11:52:10 -- common/autotest_common.sh@862 -- # return 0 00:24:37.523 11:52:10 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.523 11:52:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.783 [2024-11-20 11:52:10.570466] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.783 11:52:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.783 11:52:10 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:37.783 11:52:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.783 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.783 11:52:10 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.783 11:52:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.783 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.783 Malloc0 00:24:37.783 11:52:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.783 11:52:10 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.783 11:52:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.783 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.783 11:52:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.783 11:52:10 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:37.783 11:52:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.783 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.783 11:52:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.783 11:52:10 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.783 11:52:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.783 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.783 [2024-11-20 11:52:10.697984] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.783 11:52:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.783 11:52:10 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:37.783 11:52:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.783 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.783 11:52:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.783 11:52:10 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:37.783 11:52:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.784 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.784 [2024-11-20 11:52:10.721741] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:37.784 [ 
00:24:37.784 { 00:24:37.784 "allow_any_host": true, 00:24:37.784 "hosts": [], 00:24:37.784 "listen_addresses": [ 00:24:37.784 { 00:24:37.784 "adrfam": "IPv4", 00:24:37.784 "traddr": "10.0.0.2", 00:24:37.784 "transport": "TCP", 00:24:37.784 "trsvcid": "4420", 00:24:37.784 "trtype": "TCP" 00:24:37.784 } 00:24:37.784 ], 00:24:37.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:37.784 "subtype": "Discovery" 00:24:37.784 }, 00:24:37.784 { 00:24:37.784 "allow_any_host": true, 00:24:37.784 "hosts": [], 00:24:37.784 "listen_addresses": [ 00:24:37.784 { 00:24:37.784 "adrfam": "IPv4", 00:24:37.784 "traddr": "10.0.0.2", 00:24:37.784 "transport": "TCP", 00:24:37.784 "trsvcid": "4420", 00:24:37.784 "trtype": "TCP" 00:24:37.784 } 00:24:37.784 ], 00:24:37.784 "max_cntlid": 65519, 00:24:37.784 "max_namespaces": 32, 00:24:37.784 "min_cntlid": 1, 00:24:37.784 "model_number": "SPDK bdev Controller", 00:24:37.784 "namespaces": [ 00:24:37.784 { 00:24:37.784 "bdev_name": "Malloc0", 00:24:37.784 "eui64": "ABCDEF0123456789", 00:24:37.784 "name": "Malloc0", 00:24:37.784 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:37.784 "nsid": 1, 00:24:37.784 "uuid": "8798e372-67f2-4230-94b4-d94b70f937f6" 00:24:37.784 } 00:24:37.784 ], 00:24:37.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.784 "serial_number": "SPDK00000000000001", 00:24:37.784 "subtype": "NVMe" 00:24:37.784 } 00:24:37.784 ] 00:24:37.784 11:52:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.784 11:52:10 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:37.784 [2024-11-20 11:52:10.765264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
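For the identify test the target gets a 64 MiB malloc namespace under nqn.2016-06.io.spdk:cnode1 plus a discovery listener, and spdk_nvme_identify is then pointed at the discovery service on 10.0.0.2:4420. Condensed from the trace above (rpc_cmd again stands for scripts/rpc.py against the target's socket):

    # target-side setup for nvmf_identify
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # host side: identify through the discovery controller, dumping all log pages (-L all)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The -r string is the transport ID handed to the identify tool; subnqn here is the discovery NQN, so the connection above starts at the discovery controller whose listener was just added, and the nvmf_get_subsystems dump earlier lists both that discovery subsystem and cnode1 with the Malloc0 namespace.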
00:24:37.784 [2024-11-20 11:52:10.765312] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83058 ] 00:24:38.048 [2024-11-20 11:52:10.892319] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:38.048 [2024-11-20 11:52:10.892381] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:38.048 [2024-11-20 11:52:10.892385] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:38.048 [2024-11-20 11:52:10.892392] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:38.048 [2024-11-20 11:52:10.892398] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:38.048 [2024-11-20 11:52:10.892500] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:38.048 [2024-11-20 11:52:10.892533] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1af0d30 0 00:24:38.048 [2024-11-20 11:52:10.896682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:38.048 [2024-11-20 11:52:10.896699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:38.048 [2024-11-20 11:52:10.896703] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:38.048 [2024-11-20 11:52:10.896704] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:38.048 [2024-11-20 11:52:10.896738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.896742] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.896745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.048 [2024-11-20 11:52:10.896756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:38.048 [2024-11-20 11:52:10.896775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.048 [2024-11-20 11:52:10.904663] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.048 [2024-11-20 11:52:10.904676] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.048 [2024-11-20 11:52:10.904679] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.048 [2024-11-20 11:52:10.904690] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:38.048 [2024-11-20 11:52:10.904711] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:38.048 [2024-11-20 11:52:10.904715] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:38.048 [2024-11-20 11:52:10.904725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.048 [2024-11-20 
11:52:10.904730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.048 [2024-11-20 11:52:10.904735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.048 [2024-11-20 11:52:10.904752] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.048 [2024-11-20 11:52:10.904801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.048 [2024-11-20 11:52:10.904805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.048 [2024-11-20 11:52:10.904808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904810] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.048 [2024-11-20 11:52:10.904814] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:38.048 [2024-11-20 11:52:10.904818] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:38.048 [2024-11-20 11:52:10.904823] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904825] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904828] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.048 [2024-11-20 11:52:10.904832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.048 [2024-11-20 11:52:10.904843] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.048 [2024-11-20 11:52:10.904880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.048 [2024-11-20 11:52:10.904884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.048 [2024-11-20 11:52:10.904886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.048 [2024-11-20 11:52:10.904893] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:38.048 [2024-11-20 11:52:10.904898] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:38.048 [2024-11-20 11:52:10.904902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.048 [2024-11-20 11:52:10.904906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.048 [2024-11-20 11:52:10.904911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.048 [2024-11-20 11:52:10.904921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.048 [2024-11-20 11:52:10.904965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.048 [2024-11-20 11:52:10.904969] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 11:52:10.904971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.904974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.049 [2024-11-20 11:52:10.904977] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:38.049 [2024-11-20 11:52:10.904983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.904986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.904988] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.904992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 11:52:10.905002] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.049 [2024-11-20 11:52:10.905040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 11:52:10.905045] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 11:52:10.905047] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905049] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.049 [2024-11-20 11:52:10.905052] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:38.049 [2024-11-20 11:52:10.905055] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:38.049 [2024-11-20 11:52:10.905060] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:38.049 [2024-11-20 11:52:10.905163] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:38.049 [2024-11-20 11:52:10.905170] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:38.049 [2024-11-20 11:52:10.905176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905178] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 11:52:10.905195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.049 [2024-11-20 11:52:10.905239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 11:52:10.905243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 11:52:10.905245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:38.049 [2024-11-20 11:52:10.905247] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.049 [2024-11-20 11:52:10.905251] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:38.049 [2024-11-20 11:52:10.905257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 11:52:10.905276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.049 [2024-11-20 11:52:10.905315] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 11:52:10.905320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 11:52:10.905322] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905324] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.049 [2024-11-20 11:52:10.905328] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:38.049 [2024-11-20 11:52:10.905330] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:38.049 [2024-11-20 11:52:10.905335] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:38.049 [2024-11-20 11:52:10.905345] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:38.049 [2024-11-20 11:52:10.905352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905355] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 11:52:10.905372] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.049 [2024-11-20 11:52:10.905445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.049 [2024-11-20 11:52:10.905449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.049 [2024-11-20 11:52:10.905452] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905455] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af0d30): datao=0, datal=4096, cccid=0 00:24:38.049 [2024-11-20 11:52:10.905458] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4ef30) on tqpair(0x1af0d30): expected_datao=0, 
payload_size=4096 00:24:38.049 [2024-11-20 11:52:10.905464] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905467] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 11:52:10.905477] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 11:52:10.905479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.049 [2024-11-20 11:52:10.905488] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:38.049 [2024-11-20 11:52:10.905491] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:38.049 [2024-11-20 11:52:10.905493] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:38.049 [2024-11-20 11:52:10.905496] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:38.049 [2024-11-20 11:52:10.905499] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:38.049 [2024-11-20 11:52:10.905502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:38.049 [2024-11-20 11:52:10.905509] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:38.049 [2024-11-20 11:52:10.905514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.049 [2024-11-20 11:52:10.905534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.049 [2024-11-20 11:52:10.905577] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.049 [2024-11-20 11:52:10.905581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.049 [2024-11-20 11:52:10.905583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4ef30) on tqpair=0x1af0d30 00:24:38.049 [2024-11-20 11:52:10.905591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.049 [2024-11-20 
11:52:10.905603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905607] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.049 [2024-11-20 11:52:10.905615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905619] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.049 [2024-11-20 11:52:10.905627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905629] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.049 [2024-11-20 11:52:10.905638] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:38.049 [2024-11-20 11:52:10.905645] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:38.049 [2024-11-20 11:52:10.905649] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905651] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.049 [2024-11-20 11:52:10.905662] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af0d30) 00:24:38.049 [2024-11-20 11:52:10.905667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.049 [2024-11-20 11:52:10.905679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef30, cid 0, qid 0 00:24:38.049 [2024-11-20 11:52:10.905683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f090, cid 1, qid 0 00:24:38.050 [2024-11-20 11:52:10.905686] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f1f0, cid 2, qid 0 00:24:38.050 [2024-11-20 11:52:10.905689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.050 [2024-11-20 11:52:10.905692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f4b0, cid 4, qid 0 00:24:38.050 [2024-11-20 11:52:10.905772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 11:52:10.905776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 11:52:10.905779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1b4f4b0) on tqpair=0x1af0d30 00:24:38.050 [2024-11-20 11:52:10.905786] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:38.050 [2024-11-20 11:52:10.905789] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:38.050 [2024-11-20 11:52:10.905796] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905799] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905801] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af0d30) 00:24:38.050 [2024-11-20 11:52:10.905805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 11:52:10.905814] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f4b0, cid 4, qid 0 00:24:38.050 [2024-11-20 11:52:10.905856] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.050 [2024-11-20 11:52:10.905860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.050 [2024-11-20 11:52:10.905863] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905865] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af0d30): datao=0, datal=4096, cccid=4 00:24:38.050 [2024-11-20 11:52:10.905867] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f4b0) on tqpair(0x1af0d30): expected_datao=0, payload_size=4096 00:24:38.050 [2024-11-20 11:52:10.905872] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905874] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 11:52:10.905884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 11:52:10.905886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f4b0) on tqpair=0x1af0d30 00:24:38.050 [2024-11-20 11:52:10.905896] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:38.050 [2024-11-20 11:52:10.905913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af0d30) 00:24:38.050 [2024-11-20 11:52:10.905922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 11:52:10.905927] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.905931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1af0d30) 00:24:38.050 [2024-11-20 11:52:10.905935] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.050 [2024-11-20 11:52:10.905948] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f4b0, cid 4, qid 0 00:24:38.050 [2024-11-20 11:52:10.905952] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f610, cid 5, qid 0 00:24:38.050 [2024-11-20 11:52:10.906024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.050 [2024-11-20 11:52:10.906028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.050 [2024-11-20 11:52:10.906030] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.906033] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af0d30): datao=0, datal=1024, cccid=4 00:24:38.050 [2024-11-20 11:52:10.906036] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f4b0) on tqpair(0x1af0d30): expected_datao=0, payload_size=1024 00:24:38.050 [2024-11-20 11:52:10.906041] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.906043] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.906047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 11:52:10.906051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 11:52:10.906053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.906055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f610) on tqpair=0x1af0d30 00:24:38.050 [2024-11-20 11:52:10.947702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 11:52:10.947716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 11:52:10.947718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f4b0) on tqpair=0x1af0d30 00:24:38.050 [2024-11-20 11:52:10.947735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af0d30) 00:24:38.050 [2024-11-20 11:52:10.947745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 11:52:10.947763] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f4b0, cid 4, qid 0 00:24:38.050 [2024-11-20 11:52:10.947818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.050 [2024-11-20 11:52:10.947822] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.050 [2024-11-20 11:52:10.947824] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947827] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af0d30): datao=0, datal=3072, cccid=4 00:24:38.050 [2024-11-20 11:52:10.947829] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f4b0) on tqpair(0x1af0d30): expected_datao=0, payload_size=3072 00:24:38.050 [2024-11-20 
11:52:10.947835] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947837] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.050 [2024-11-20 11:52:10.947846] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.050 [2024-11-20 11:52:10.947848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947851] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f4b0) on tqpair=0x1af0d30 00:24:38.050 [2024-11-20 11:52:10.947857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947859] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af0d30) 00:24:38.050 [2024-11-20 11:52:10.947866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.050 [2024-11-20 11:52:10.947879] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f4b0, cid 4, qid 0 00:24:38.050 [2024-11-20 11:52:10.947923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.050 [2024-11-20 11:52:10.947927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.050 [2024-11-20 11:52:10.947929] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947932] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af0d30): datao=0, datal=8, cccid=4 00:24:38.050 [2024-11-20 11:52:10.947934] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f4b0) on tqpair(0x1af0d30): expected_datao=0, payload_size=8 00:24:38.050 [2024-11-20 11:52:10.947939] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.050 [2024-11-20 11:52:10.947941] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.050 ===================================================== 00:24:38.050 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:38.050 ===================================================== 00:24:38.050 Controller Capabilities/Features 00:24:38.050 ================================ 00:24:38.050 Vendor ID: 0000 00:24:38.050 Subsystem Vendor ID: 0000 00:24:38.050 Serial Number: .................... 00:24:38.050 Model Number: ........................................ 
00:24:38.050 Firmware Version: 24.01.1 00:24:38.050 Recommended Arb Burst: 0 00:24:38.050 IEEE OUI Identifier: 00 00 00 00:24:38.050 Multi-path I/O 00:24:38.050 May have multiple subsystem ports: No 00:24:38.050 May have multiple controllers: No 00:24:38.050 Associated with SR-IOV VF: No 00:24:38.050 Max Data Transfer Size: 131072 00:24:38.050 Max Number of Namespaces: 0 00:24:38.050 Max Number of I/O Queues: 1024 00:24:38.050 NVMe Specification Version (VS): 1.3 00:24:38.050 NVMe Specification Version (Identify): 1.3 00:24:38.050 Maximum Queue Entries: 128 00:24:38.050 Contiguous Queues Required: Yes 00:24:38.050 Arbitration Mechanisms Supported 00:24:38.050 Weighted Round Robin: Not Supported 00:24:38.050 Vendor Specific: Not Supported 00:24:38.050 Reset Timeout: 15000 ms 00:24:38.050 Doorbell Stride: 4 bytes 00:24:38.050 NVM Subsystem Reset: Not Supported 00:24:38.050 Command Sets Supported 00:24:38.050 NVM Command Set: Supported 00:24:38.050 Boot Partition: Not Supported 00:24:38.050 Memory Page Size Minimum: 4096 bytes 00:24:38.050 Memory Page Size Maximum: 4096 bytes 00:24:38.050 Persistent Memory Region: Not Supported 00:24:38.050 Optional Asynchronous Events Supported 00:24:38.050 Namespace Attribute Notices: Not Supported 00:24:38.050 Firmware Activation Notices: Not Supported 00:24:38.050 ANA Change Notices: Not Supported 00:24:38.050 PLE Aggregate Log Change Notices: Not Supported 00:24:38.050 LBA Status Info Alert Notices: Not Supported 00:24:38.050 EGE Aggregate Log Change Notices: Not Supported 00:24:38.050 Normal NVM Subsystem Shutdown event: Not Supported 00:24:38.051 Zone Descriptor Change Notices: Not Supported 00:24:38.051 Discovery Log Change Notices: Supported 00:24:38.051 Controller Attributes 00:24:38.051 128-bit Host Identifier: Not Supported 00:24:38.051 Non-Operational Permissive Mode: Not Supported 00:24:38.051 NVM Sets: Not Supported 00:24:38.051 Read Recovery Levels: Not Supported 00:24:38.051 Endurance Groups: Not Supported 00:24:38.051 Predictable Latency Mode: Not Supported 00:24:38.051 Traffic Based Keep ALive: Not Supported 00:24:38.051 Namespace Granularity: Not Supported 00:24:38.051 SQ Associations: Not Supported 00:24:38.051 UUID List: Not Supported 00:24:38.051 Multi-Domain Subsystem: Not Supported 00:24:38.051 Fixed Capacity Management: Not Supported 00:24:38.051 Variable Capacity Management: Not Supported 00:24:38.051 Delete Endurance Group: Not Supported 00:24:38.051 Delete NVM Set: Not Supported 00:24:38.051 Extended LBA Formats Supported: Not Supported 00:24:38.051 Flexible Data Placement Supported: Not Supported 00:24:38.051 00:24:38.051 Controller Memory Buffer Support 00:24:38.051 ================================ 00:24:38.051 Supported: No 00:24:38.051 00:24:38.051 Persistent Memory Region Support 00:24:38.051 ================================ 00:24:38.051 Supported: No 00:24:38.051 00:24:38.051 Admin Command Set Attributes 00:24:38.051 ============================ 00:24:38.051 Security Send/Receive: Not Supported 00:24:38.051 Format NVM: Not Supported 00:24:38.051 Firmware Activate/Download: Not Supported 00:24:38.051 Namespace Management: Not Supported 00:24:38.051 Device Self-Test: Not Supported 00:24:38.051 Directives: Not Supported 00:24:38.051 NVMe-MI: Not Supported 00:24:38.051 Virtualization Management: Not Supported 00:24:38.051 Doorbell Buffer Config: Not Supported 00:24:38.051 Get LBA Status Capability: Not Supported 00:24:38.051 Command & Feature Lockdown Capability: Not Supported 00:24:38.051 Abort Command Limit: 1 00:24:38.051 
Async Event Request Limit: 4 00:24:38.051 Number of Firmware Slots: N/A 00:24:38.051 Firmware Slot 1 Read-Only: N/A 00:24:38.051 Firmware Activation Without Reset: N/A 00:24:38.051 Multiple Update Detection Support: N/A 00:24:38.051 Firmware Update Granularity: No Information Provided 00:24:38.051 Per-Namespace SMART Log: No 00:24:38.051 Asymmetric Namespace Access Log Page: Not Supported 00:24:38.051 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:38.051 Command Effects Log Page: Not Supported 00:24:38.051 Get Log Page Extended Data: Supported 00:24:38.051 Telemetry Log Pages: Not Supported 00:24:38.051 Persistent Event Log Pages: Not Supported 00:24:38.051 Supported Log Pages Log Page: May Support 00:24:38.051 Commands Supported & Effects Log Page: Not Supported 00:24:38.051 Feature Identifiers & Effects Log Page:May Support 00:24:38.051 NVMe-MI Commands & Effects Log Page: May Support 00:24:38.051 Data Area 4 for Telemetry Log: Not Supported 00:24:38.051 Error Log Page Entries Supported: 128 00:24:38.051 Keep Alive: Not Supported 00:24:38.051 00:24:38.051 NVM Command Set Attributes 00:24:38.051 ========================== 00:24:38.051 Submission Queue Entry Size 00:24:38.051 Max: 1 00:24:38.051 Min: 1 00:24:38.051 Completion Queue Entry Size 00:24:38.051 Max: 1 00:24:38.051 Min: 1 00:24:38.051 Number of Namespaces: 0 00:24:38.051 Compare Command: Not Supported 00:24:38.051 Write Uncorrectable Command: Not Supported 00:24:38.051 Dataset Management Command: Not Supported 00:24:38.051 Write Zeroes Command: Not Supported 00:24:38.051 Set Features Save Field: Not Supported 00:24:38.051 Reservations: Not Supported 00:24:38.051 Timestamp: Not Supported 00:24:38.051 Copy: Not Supported 00:24:38.051 Volatile Write Cache: Not Present 00:24:38.051 Atomic Write Unit (Normal): 1 00:24:38.051 Atomic Write Unit (PFail): 1 00:24:38.051 Atomic Compare & Write Unit: 1 00:24:38.051 Fused Compare & Write: Supported 00:24:38.051 Scatter-Gather List 00:24:38.051 SGL Command Set: Supported 00:24:38.051 SGL Keyed: Supported 00:24:38.051 SGL Bit Bucket Descriptor: Not Supported 00:24:38.051 SGL Metadata Pointer: Not Supported 00:24:38.051 Oversized SGL: Not Supported 00:24:38.051 SGL Metadata Address: Not Supported 00:24:38.051 SGL Offset: Supported 00:24:38.051 Transport SGL Data Block: Not Supported 00:24:38.051 Replay Protected Memory Block: Not Supported 00:24:38.051 00:24:38.051 Firmware Slot Information 00:24:38.051 ========================= 00:24:38.051 Active slot: 0 00:24:38.051 00:24:38.051 00:24:38.051 Error Log 00:24:38.051 ========= 00:24:38.051 00:24:38.051 Active Namespaces 00:24:38.051 ================= 00:24:38.051 Discovery Log Page 00:24:38.051 ================== 00:24:38.051 Generation Counter: 2 00:24:38.051 Number of Records: 2 00:24:38.051 Record Format: 0 00:24:38.051 00:24:38.051 Discovery Log Entry 0 00:24:38.051 ---------------------- 00:24:38.051 Transport Type: 3 (TCP) 00:24:38.051 Address Family: 1 (IPv4) 00:24:38.051 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:38.051 Entry Flags: 00:24:38.051 Duplicate Returned Information: 1 00:24:38.051 Explicit Persistent Connection Support for Discovery: 1 00:24:38.051 Transport Requirements: 00:24:38.051 Secure Channel: Not Required 00:24:38.051 Port ID: 0 (0x0000) 00:24:38.051 Controller ID: 65535 (0xffff) 00:24:38.051 Admin Max SQ Size: 128 00:24:38.051 Transport Service Identifier: 4420 00:24:38.051 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:38.051 Transport Address: 10.0.0.2 00:24:38.051 
Discovery Log Entry 1 00:24:38.051 ---------------------- 00:24:38.051 Transport Type: 3 (TCP) 00:24:38.051 Address Family: 1 (IPv4) 00:24:38.051 Subsystem Type: 2 (NVM Subsystem) 00:24:38.051 Entry Flags: 00:24:38.051 Duplicate Returned Information: 0 00:24:38.051 Explicit Persistent Connection Support for Discovery: 0 00:24:38.051 Transport Requirements: 00:24:38.051 Secure Channel: Not Required 00:24:38.051 Port ID: 0 (0x0000) 00:24:38.051 Controller ID: 65535 (0xffff) 00:24:38.051 Admin Max SQ Size: 128 00:24:38.051 Transport Service Identifier: 4420 00:24:38.051 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:38.051 Transport Address: 10.0.0.2 [2024-11-20 11:52:10.989706] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.051 [2024-11-20 11:52:10.989723] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.051 [2024-11-20 11:52:10.989726] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.051 [2024-11-20 11:52:10.989729] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f4b0) on tqpair=0x1af0d30 00:24:38.051 [2024-11-20 11:52:10.989844] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:38.051 [2024-11-20 11:52:10.989856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.051 [2024-11-20 11:52:10.989862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.051 [2024-11-20 11:52:10.989866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.051 [2024-11-20 11:52:10.989870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.051 [2024-11-20 11:52:10.989876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.051 [2024-11-20 11:52:10.989879] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.051 [2024-11-20 11:52:10.989881] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.051 [2024-11-20 11:52:10.989887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.051 [2024-11-20 11:52:10.989905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.051 [2024-11-20 11:52:10.989955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.051 [2024-11-20 11:52:10.989959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.051 [2024-11-20 11:52:10.989961] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.051 [2024-11-20 11:52:10.989964] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.051 [2024-11-20 11:52:10.989969] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.051 [2024-11-20 11:52:10.989971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.051 [2024-11-20 11:52:10.989974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.051 [2024-11-20 11:52:10.989978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.051 [2024-11-20 11:52:10.989991] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.051 [2024-11-20 11:52:10.990044] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.051 [2024-11-20 11:52:10.990048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.051 [2024-11-20 11:52:10.990050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990052] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990056] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:38.052 [2024-11-20 11:52:10.990059] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:38.052 [2024-11-20 11:52:10.990065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990069] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.990084] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990133] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990142] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990144] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.990159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990200] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990214] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.990228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990276] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990285] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.990301] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990348] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990357] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.990373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990428] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990430] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990432] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
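The discovery log printed above advertises two entries on TCP 10.0.0.2:4420: the discovery subsystem itself (entry 0) and the NVM subsystem nqn.2016-06.io.spdk:cnode1 (entry 1). For orientation only, and not part of this test run, this is the information a Linux initiator would feed to nvme-cli to reach the same target:

  nvme discover -t tcp -a 10.0.0.2 -s 4420                                # fetch the same discovery log page
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach the advertised NVM subsystem

The trace that continues below is the shutdown/property-get polling as the identify tool tears down its discovery connection, followed by a second identify run issued directly against nqn.2016-06.io.spdk:cnode1.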
00:24:38.052 [2024-11-20 11:52:10.990446] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990486] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990490] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990492] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990494] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.990519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990572] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990577] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.990591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.990627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.990632] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.990634] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990636] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.990643] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990645] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.990647] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af0d30) 00:24:38.052 [2024-11-20 11:52:10.990652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.052 [2024-11-20 11:52:10.994685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f350, cid 3, qid 0 00:24:38.052 [2024-11-20 11:52:10.994726] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.052 [2024-11-20 11:52:10.994731] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.052 [2024-11-20 11:52:10.994733] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.052 [2024-11-20 11:52:10.994735] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b4f350) on tqpair=0x1af0d30 00:24:38.052 [2024-11-20 11:52:10.994741] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:24:38.052 00:24:38.052 11:52:11 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:38.052 [2024-11-20 11:52:11.036895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:38.052 [2024-11-20 11:52:11.036943] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83061 ] 00:24:38.327 [2024-11-20 11:52:11.166838] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:38.327 [2024-11-20 11:52:11.166897] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:38.327 [2024-11-20 11:52:11.166900] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:38.327 [2024-11-20 11:52:11.166908] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:38.327 [2024-11-20 11:52:11.166913] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:38.327 [2024-11-20 11:52:11.166991] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:38.327 [2024-11-20 11:52:11.167019] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1adbd30 0 00:24:38.327 [2024-11-20 11:52:11.174665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:38.327 [2024-11-20 11:52:11.174680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:38.327 [2024-11-20 11:52:11.174683] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:38.327 [2024-11-20 11:52:11.174685] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:38.327 [2024-11-20 11:52:11.174732] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.174735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.174738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.327 [2024-11-20 11:52:11.174745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:38.327 [2024-11-20 11:52:11.174763] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.327 [2024-11-20 11:52:11.182673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.327 [2024-11-20 11:52:11.182686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.327 [2024-11-20 11:52:11.182688] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.182691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.327 [2024-11-20 11:52:11.182698] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:38.327 [2024-11-20 11:52:11.182702] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:38.327 [2024-11-20 11:52:11.182722] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:38.327 [2024-11-20 11:52:11.182730] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.182733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.182735] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.327 [2024-11-20 11:52:11.182740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.327 [2024-11-20 11:52:11.182757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.327 [2024-11-20 11:52:11.182808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.327 [2024-11-20 11:52:11.182812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.327 [2024-11-20 11:52:11.182814] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.182816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.327 [2024-11-20 11:52:11.182820] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:38.327 [2024-11-20 11:52:11.182824] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:38.327 [2024-11-20 11:52:11.182829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.182832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.327 [2024-11-20 11:52:11.182834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.327 [2024-11-20 11:52:11.182838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.327 [2024-11-20 11:52:11.182849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.327 [2024-11-20 11:52:11.182893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.327 [2024-11-20 11:52:11.182898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.327 [2024-11-20 11:52:11.182900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.182902] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.328 [2024-11-20 11:52:11.182906] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:38.328 [2024-11-20 11:52:11.182911] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 
15000 ms) 00:24:38.328 [2024-11-20 11:52:11.182915] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.182917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.182919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.182923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.328 [2024-11-20 11:52:11.182933] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.328 [2024-11-20 11:52:11.182972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.328 [2024-11-20 11:52:11.182976] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.328 [2024-11-20 11:52:11.182978] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.182980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.328 [2024-11-20 11:52:11.182984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:38.328 [2024-11-20 11:52:11.182990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.182993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.182995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.182999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.328 [2024-11-20 11:52:11.183009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.328 [2024-11-20 11:52:11.183053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.328 [2024-11-20 11:52:11.183057] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.328 [2024-11-20 11:52:11.183059] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.328 [2024-11-20 11:52:11.183065] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:38.328 [2024-11-20 11:52:11.183068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:38.328 [2024-11-20 11:52:11.183072] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:38.328 [2024-11-20 11:52:11.183175] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:38.328 [2024-11-20 11:52:11.183191] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:38.328 [2024-11-20 11:52:11.183197] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183199] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 
[2024-11-20 11:52:11.183202] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.328 [2024-11-20 11:52:11.183217] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.328 [2024-11-20 11:52:11.183262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.328 [2024-11-20 11:52:11.183267] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.328 [2024-11-20 11:52:11.183269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183271] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.328 [2024-11-20 11:52:11.183274] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:38.328 [2024-11-20 11:52:11.183280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183284] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.328 [2024-11-20 11:52:11.183299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.328 [2024-11-20 11:52:11.183352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.328 [2024-11-20 11:52:11.183356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.328 [2024-11-20 11:52:11.183358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.328 [2024-11-20 11:52:11.183364] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:38.328 [2024-11-20 11:52:11.183366] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:38.328 [2024-11-20 11:52:11.183372] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:38.328 [2024-11-20 11:52:11.183381] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:38.328 [2024-11-20 11:52:11.183388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.328 [2024-11-20 11:52:11.183408] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.328 [2024-11-20 11:52:11.183511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.328 [2024-11-20 11:52:11.183517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.328 [2024-11-20 11:52:11.183519] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183522] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=4096, cccid=0 00:24:38.328 [2024-11-20 11:52:11.183525] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b39f30) on tqpair(0x1adbd30): expected_datao=0, payload_size=4096 00:24:38.328 [2024-11-20 11:52:11.183530] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183532] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.328 [2024-11-20 11:52:11.183546] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.328 [2024-11-20 11:52:11.183548] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.328 [2024-11-20 11:52:11.183556] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:38.328 [2024-11-20 11:52:11.183559] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:38.328 [2024-11-20 11:52:11.183562] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:38.328 [2024-11-20 11:52:11.183564] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:38.328 [2024-11-20 11:52:11.183573] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:38.328 [2024-11-20 11:52:11.183576] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:38.328 [2024-11-20 11:52:11.183584] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:38.328 [2024-11-20 11:52:11.183588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183591] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.328 [2024-11-20 11:52:11.183609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.328 [2024-11-20 11:52:11.183669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.328 [2024-11-20 11:52:11.183673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.328 [2024-11-20 11:52:11.183676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183678] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b39f30) on tqpair=0x1adbd30 00:24:38.328 [2024-11-20 11:52:11.183683] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183686] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183688] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.328 [2024-11-20 11:52:11.183695] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.328 [2024-11-20 11:52:11.183707] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183709] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183711] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.328 [2024-11-20 11:52:11.183718] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.328 [2024-11-20 11:52:11.183722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.328 [2024-11-20 11:52:11.183727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.329 [2024-11-20 11:52:11.183730] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.183738] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.183742] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.183744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.183746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1adbd30) 00:24:38.329 [2024-11-20 11:52:11.183750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.329 [2024-11-20 11:52:11.183762] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b39f30, cid 0, qid 0 00:24:38.329 [2024-11-20 11:52:11.183766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a090, cid 1, qid 0 00:24:38.329 [2024-11-20 11:52:11.183769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a1f0, cid 2, qid 0 00:24:38.329 [2024-11-20 11:52:11.183772] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.329 [2024-11-20 11:52:11.183775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a4b0, cid 4, qid 0 00:24:38.329 [2024-11-20 11:52:11.183878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.329 [2024-11-20 11:52:11.183882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.329 [2024-11-20 11:52:11.183884] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.183887] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a4b0) on tqpair=0x1adbd30 00:24:38.329 [2024-11-20 11:52:11.183890] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:38.329 [2024-11-20 11:52:11.183893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.183898] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.183905] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.183909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.183912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.183914] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1adbd30) 00:24:38.329 [2024-11-20 11:52:11.183918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.329 [2024-11-20 11:52:11.183928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a4b0, cid 4, qid 0 00:24:38.329 [2024-11-20 11:52:11.183982] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.329 [2024-11-20 11:52:11.183986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.329 [2024-11-20 11:52:11.183988] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.183990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a4b0) on tqpair=0x1adbd30 00:24:38.329 [2024-11-20 11:52:11.184036] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184051] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1adbd30) 00:24:38.329 [2024-11-20 11:52:11.184056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.329 [2024-11-20 11:52:11.184066] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a4b0, cid 4, qid 0 00:24:38.329 [2024-11-20 11:52:11.184128] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.329 [2024-11-20 11:52:11.184132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.329 [2024-11-20 11:52:11.184134] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184137] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=4096, cccid=4 00:24:38.329 [2024-11-20 11:52:11.184139] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3a4b0) on tqpair(0x1adbd30): expected_datao=0, payload_size=4096 00:24:38.329 [2024-11-20 11:52:11.184144] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184147] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.329 [2024-11-20 11:52:11.184156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.329 [2024-11-20 11:52:11.184158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184160] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a4b0) on tqpair=0x1adbd30 00:24:38.329 [2024-11-20 11:52:11.184170] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:38.329 [2024-11-20 11:52:11.184176] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184182] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184186] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184189] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1adbd30) 00:24:38.329 [2024-11-20 11:52:11.184195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.329 [2024-11-20 11:52:11.184206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a4b0, cid 4, qid 0 00:24:38.329 [2024-11-20 11:52:11.184280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.329 [2024-11-20 11:52:11.184285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.329 [2024-11-20 11:52:11.184287] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184289] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=4096, cccid=4 00:24:38.329 [2024-11-20 11:52:11.184291] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3a4b0) on tqpair(0x1adbd30): expected_datao=0, payload_size=4096 00:24:38.329 [2024-11-20 11:52:11.184296] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184298] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:24:38.329 [2024-11-20 11:52:11.184310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.329 [2024-11-20 11:52:11.184312] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184315] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a4b0) on tqpair=0x1adbd30 00:24:38.329 [2024-11-20 11:52:11.184325] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184331] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184335] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184340] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1adbd30) 00:24:38.329 [2024-11-20 11:52:11.184344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.329 [2024-11-20 11:52:11.184354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a4b0, cid 4, qid 0 00:24:38.329 [2024-11-20 11:52:11.184410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.329 [2024-11-20 11:52:11.184415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.329 [2024-11-20 11:52:11.184417] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184419] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=4096, cccid=4 00:24:38.329 [2024-11-20 11:52:11.184421] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3a4b0) on tqpair(0x1adbd30): expected_datao=0, payload_size=4096 00:24:38.329 [2024-11-20 11:52:11.184426] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184428] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184437] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.329 [2024-11-20 11:52:11.184442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.329 [2024-11-20 11:52:11.184444] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a4b0) on tqpair=0x1adbd30 00:24:38.329 [2024-11-20 11:52:11.184451] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184456] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184463] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184466] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184470] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184473] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:38.329 [2024-11-20 11:52:11.184475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:38.329 [2024-11-20 11:52:11.184479] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:38.329 [2024-11-20 11:52:11.184488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184490] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1adbd30) 00:24:38.329 [2024-11-20 11:52:11.184497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.329 [2024-11-20 11:52:11.184501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.329 [2024-11-20 11:52:11.184505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.330 [2024-11-20 11:52:11.184523] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a4b0, cid 4, qid 0 00:24:38.330 [2024-11-20 11:52:11.184526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a610, cid 5, qid 0 00:24:38.330 [2024-11-20 11:52:11.184594] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.184598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.184600] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a4b0) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.184607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.184611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.184613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a610) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.184622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184626] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.330 [2024-11-20 11:52:11.184640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a610, cid 5, qid 
0 00:24:38.330 [2024-11-20 11:52:11.184709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.184713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.184716] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184718] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a610) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.184724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184728] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.330 [2024-11-20 11:52:11.184743] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a610, cid 5, qid 0 00:24:38.330 [2024-11-20 11:52:11.184804] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.184808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.184810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a610) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.184819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184821] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.330 [2024-11-20 11:52:11.184837] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a610, cid 5, qid 0 00:24:38.330 [2024-11-20 11:52:11.184883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.184888] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.184890] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a610) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.184900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184903] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.330 [2024-11-20 11:52:11.184913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.330 [2024-11-20 
11:52:11.184918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.330 [2024-11-20 11:52:11.184926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184928] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.330 [2024-11-20 11:52:11.184940] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184942] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.184944] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1adbd30) 00:24:38.330 [2024-11-20 11:52:11.184948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.330 [2024-11-20 11:52:11.184959] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a610, cid 5, qid 0 00:24:38.330 [2024-11-20 11:52:11.184963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a4b0, cid 4, qid 0 00:24:38.330 [2024-11-20 11:52:11.184965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a770, cid 6, qid 0 00:24:38.330 [2024-11-20 11:52:11.184968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a8d0, cid 7, qid 0 00:24:38.330 [2024-11-20 11:52:11.185102] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.330 [2024-11-20 11:52:11.185114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.330 [2024-11-20 11:52:11.185116] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185118] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=8192, cccid=5 00:24:38.330 [2024-11-20 11:52:11.185121] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3a610) on tqpair(0x1adbd30): expected_datao=0, payload_size=8192 00:24:38.330 [2024-11-20 11:52:11.185132] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185134] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.330 [2024-11-20 11:52:11.185142] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.330 [2024-11-20 11:52:11.185144] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185146] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=512, cccid=4 00:24:38.330 [2024-11-20 11:52:11.185149] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3a4b0) on tqpair(0x1adbd30): expected_datao=0, payload_size=512 00:24:38.330 [2024-11-20 11:52:11.185154] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185156] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.330 [2024-11-20 11:52:11.185163] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.330 [2024-11-20 11:52:11.185165] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185167] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=512, cccid=6 00:24:38.330 [2024-11-20 11:52:11.185170] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3a770) on tqpair(0x1adbd30): expected_datao=0, payload_size=512 00:24:38.330 [2024-11-20 11:52:11.185174] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185177] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185180] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.330 [2024-11-20 11:52:11.185184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.330 [2024-11-20 11:52:11.185186] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185188] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1adbd30): datao=0, datal=4096, cccid=7 00:24:38.330 [2024-11-20 11:52:11.185190] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b3a8d0) on tqpair(0x1adbd30): expected_datao=0, payload_size=4096 00:24:38.330 [2024-11-20 11:52:11.185195] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185197] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.185206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.185208] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185210] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a610) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.185223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.185227] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.185230] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185232] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a4b0) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.185239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.185243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.185245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185247] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a770) on tqpair=0x1adbd30 00:24:38.330 [2024-11-20 11:52:11.185252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.330 [2024-11-20 11:52:11.185256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.330 [2024-11-20 11:52:11.185258] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.330 [2024-11-20 11:52:11.185260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a8d0) on tqpair=0x1adbd30 00:24:38.330 ===================================================== 00:24:38.330 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.330 ===================================================== 00:24:38.330 Controller Capabilities/Features 00:24:38.330 ================================ 00:24:38.331 Vendor ID: 8086 00:24:38.331 Subsystem Vendor ID: 8086 00:24:38.331 Serial Number: SPDK00000000000001 00:24:38.331 Model Number: SPDK bdev Controller 00:24:38.331 Firmware Version: 24.01.1 00:24:38.331 Recommended Arb Burst: 6 00:24:38.331 IEEE OUI Identifier: e4 d2 5c 00:24:38.331 Multi-path I/O 00:24:38.331 May have multiple subsystem ports: Yes 00:24:38.331 May have multiple controllers: Yes 00:24:38.331 Associated with SR-IOV VF: No 00:24:38.331 Max Data Transfer Size: 131072 00:24:38.331 Max Number of Namespaces: 32 00:24:38.331 Max Number of I/O Queues: 127 00:24:38.331 NVMe Specification Version (VS): 1.3 00:24:38.331 NVMe Specification Version (Identify): 1.3 00:24:38.331 Maximum Queue Entries: 128 00:24:38.331 Contiguous Queues Required: Yes 00:24:38.331 Arbitration Mechanisms Supported 00:24:38.331 Weighted Round Robin: Not Supported 00:24:38.331 Vendor Specific: Not Supported 00:24:38.331 Reset Timeout: 15000 ms 00:24:38.331 Doorbell Stride: 4 bytes 00:24:38.331 NVM Subsystem Reset: Not Supported 00:24:38.331 Command Sets Supported 00:24:38.331 NVM Command Set: Supported 00:24:38.331 Boot Partition: Not Supported 00:24:38.331 Memory Page Size Minimum: 4096 bytes 00:24:38.331 Memory Page Size Maximum: 4096 bytes 00:24:38.331 Persistent Memory Region: Not Supported 00:24:38.331 Optional Asynchronous Events Supported 00:24:38.331 Namespace Attribute Notices: Supported 00:24:38.331 Firmware Activation Notices: Not Supported 00:24:38.331 ANA Change Notices: Not Supported 00:24:38.331 PLE Aggregate Log Change Notices: Not Supported 00:24:38.331 LBA Status Info Alert Notices: Not Supported 00:24:38.331 EGE Aggregate Log Change Notices: Not Supported 00:24:38.331 Normal NVM Subsystem Shutdown event: Not Supported 00:24:38.331 Zone Descriptor Change Notices: Not Supported 00:24:38.331 Discovery Log Change Notices: Not Supported 00:24:38.331 Controller Attributes 00:24:38.331 128-bit Host Identifier: Supported 00:24:38.331 Non-Operational Permissive Mode: Not Supported 00:24:38.331 NVM Sets: Not Supported 00:24:38.331 Read Recovery Levels: Not Supported 00:24:38.331 Endurance Groups: Not Supported 00:24:38.331 Predictable Latency Mode: Not Supported 00:24:38.331 Traffic Based Keep ALive: Not Supported 00:24:38.331 Namespace Granularity: Not Supported 00:24:38.331 SQ Associations: Not Supported 00:24:38.331 UUID List: Not Supported 00:24:38.331 Multi-Domain Subsystem: Not Supported 00:24:38.331 Fixed Capacity Management: Not Supported 00:24:38.331 Variable Capacity Management: Not Supported 00:24:38.331 Delete Endurance Group: Not Supported 00:24:38.331 Delete NVM Set: Not Supported 00:24:38.331 Extended LBA Formats Supported: Not Supported 00:24:38.331 Flexible Data Placement Supported: Not Supported 00:24:38.331 00:24:38.331 Controller Memory Buffer Support 00:24:38.331 ================================ 00:24:38.331 Supported: No 00:24:38.331 00:24:38.331 Persistent Memory Region Support 00:24:38.331 ================================ 00:24:38.331 Supported: No 
00:24:38.331 00:24:38.331 Admin Command Set Attributes 00:24:38.331 ============================ 00:24:38.331 Security Send/Receive: Not Supported 00:24:38.331 Format NVM: Not Supported 00:24:38.331 Firmware Activate/Download: Not Supported 00:24:38.331 Namespace Management: Not Supported 00:24:38.331 Device Self-Test: Not Supported 00:24:38.331 Directives: Not Supported 00:24:38.331 NVMe-MI: Not Supported 00:24:38.331 Virtualization Management: Not Supported 00:24:38.331 Doorbell Buffer Config: Not Supported 00:24:38.331 Get LBA Status Capability: Not Supported 00:24:38.331 Command & Feature Lockdown Capability: Not Supported 00:24:38.331 Abort Command Limit: 4 00:24:38.331 Async Event Request Limit: 4 00:24:38.331 Number of Firmware Slots: N/A 00:24:38.331 Firmware Slot 1 Read-Only: N/A 00:24:38.331 Firmware Activation Without Reset: N/A 00:24:38.331 Multiple Update Detection Support: N/A 00:24:38.331 Firmware Update Granularity: No Information Provided 00:24:38.331 Per-Namespace SMART Log: No 00:24:38.331 Asymmetric Namespace Access Log Page: Not Supported 00:24:38.331 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:38.331 Command Effects Log Page: Supported 00:24:38.331 Get Log Page Extended Data: Supported 00:24:38.331 Telemetry Log Pages: Not Supported 00:24:38.331 Persistent Event Log Pages: Not Supported 00:24:38.331 Supported Log Pages Log Page: May Support 00:24:38.331 Commands Supported & Effects Log Page: Not Supported 00:24:38.331 Feature Identifiers & Effects Log Page:May Support 00:24:38.331 NVMe-MI Commands & Effects Log Page: May Support 00:24:38.331 Data Area 4 for Telemetry Log: Not Supported 00:24:38.331 Error Log Page Entries Supported: 128 00:24:38.331 Keep Alive: Supported 00:24:38.331 Keep Alive Granularity: 10000 ms 00:24:38.331 00:24:38.331 NVM Command Set Attributes 00:24:38.331 ========================== 00:24:38.331 Submission Queue Entry Size 00:24:38.331 Max: 64 00:24:38.331 Min: 64 00:24:38.331 Completion Queue Entry Size 00:24:38.331 Max: 16 00:24:38.331 Min: 16 00:24:38.331 Number of Namespaces: 32 00:24:38.331 Compare Command: Supported 00:24:38.331 Write Uncorrectable Command: Not Supported 00:24:38.331 Dataset Management Command: Supported 00:24:38.331 Write Zeroes Command: Supported 00:24:38.331 Set Features Save Field: Not Supported 00:24:38.331 Reservations: Supported 00:24:38.331 Timestamp: Not Supported 00:24:38.331 Copy: Supported 00:24:38.331 Volatile Write Cache: Present 00:24:38.331 Atomic Write Unit (Normal): 1 00:24:38.331 Atomic Write Unit (PFail): 1 00:24:38.331 Atomic Compare & Write Unit: 1 00:24:38.331 Fused Compare & Write: Supported 00:24:38.331 Scatter-Gather List 00:24:38.331 SGL Command Set: Supported 00:24:38.331 SGL Keyed: Supported 00:24:38.331 SGL Bit Bucket Descriptor: Not Supported 00:24:38.331 SGL Metadata Pointer: Not Supported 00:24:38.331 Oversized SGL: Not Supported 00:24:38.331 SGL Metadata Address: Not Supported 00:24:38.331 SGL Offset: Supported 00:24:38.331 Transport SGL Data Block: Not Supported 00:24:38.331 Replay Protected Memory Block: Not Supported 00:24:38.331 00:24:38.331 Firmware Slot Information 00:24:38.331 ========================= 00:24:38.331 Active slot: 1 00:24:38.331 Slot 1 Firmware Revision: 24.01.1 00:24:38.331 00:24:38.331 00:24:38.331 Commands Supported and Effects 00:24:38.331 ============================== 00:24:38.331 Admin Commands 00:24:38.331 -------------- 00:24:38.331 Get Log Page (02h): Supported 00:24:38.331 Identify (06h): Supported 00:24:38.331 Abort (08h): Supported 00:24:38.331 Set 
Features (09h): Supported 00:24:38.331 Get Features (0Ah): Supported 00:24:38.331 Asynchronous Event Request (0Ch): Supported 00:24:38.331 Keep Alive (18h): Supported 00:24:38.331 I/O Commands 00:24:38.331 ------------ 00:24:38.331 Flush (00h): Supported LBA-Change 00:24:38.331 Write (01h): Supported LBA-Change 00:24:38.331 Read (02h): Supported 00:24:38.331 Compare (05h): Supported 00:24:38.331 Write Zeroes (08h): Supported LBA-Change 00:24:38.331 Dataset Management (09h): Supported LBA-Change 00:24:38.331 Copy (19h): Supported LBA-Change 00:24:38.331 Unknown (79h): Supported LBA-Change 00:24:38.331 Unknown (7Ah): Supported 00:24:38.331 00:24:38.331 Error Log 00:24:38.331 ========= 00:24:38.331 00:24:38.331 Arbitration 00:24:38.331 =========== 00:24:38.331 Arbitration Burst: 1 00:24:38.331 00:24:38.331 Power Management 00:24:38.331 ================ 00:24:38.331 Number of Power States: 1 00:24:38.331 Current Power State: Power State #0 00:24:38.331 Power State #0: 00:24:38.331 Max Power: 0.00 W 00:24:38.331 Non-Operational State: Operational 00:24:38.331 Entry Latency: Not Reported 00:24:38.331 Exit Latency: Not Reported 00:24:38.331 Relative Read Throughput: 0 00:24:38.331 Relative Read Latency: 0 00:24:38.331 Relative Write Throughput: 0 00:24:38.331 Relative Write Latency: 0 00:24:38.331 Idle Power: Not Reported 00:24:38.331 Active Power: Not Reported 00:24:38.331 Non-Operational Permissive Mode: Not Supported 00:24:38.331 00:24:38.331 Health Information 00:24:38.331 ================== 00:24:38.331 Critical Warnings: 00:24:38.331 Available Spare Space: OK 00:24:38.331 Temperature: OK 00:24:38.331 Device Reliability: OK 00:24:38.331 Read Only: No 00:24:38.331 Volatile Memory Backup: OK 00:24:38.331 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:38.331 Temperature Threshold: [2024-11-20 11:52:11.185342] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185346] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185348] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.185352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.185365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a8d0, cid 7, qid 0 00:24:38.332 [2024-11-20 11:52:11.185415] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.185419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.185421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a8d0) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.185445] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:38.332 [2024-11-20 11:52:11.185453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.332 [2024-11-20 11:52:11.185457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.332 [2024-11-20 11:52:11.185461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.332 [2024-11-20 11:52:11.185465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.332 [2024-11-20 11:52:11.185471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.185480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.185492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.185543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.185547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.185550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185552] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.185557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.185566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.185577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.185643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.185647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.185649] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.185664] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:38.332 [2024-11-20 11:52:11.185667] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:38.332 [2024-11-20 11:52:11.185672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185677] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.185681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.185692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.185734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 
11:52:11.185738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.185740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.185749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185751] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.185758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.185768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.185815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.185819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.185821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.185829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.185839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.185848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.185901] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.185905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.185907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.185916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.185920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.185925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.185935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.185989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.185993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.185995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:38.332 [2024-11-20 11:52:11.185998] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.186004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.186006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.186008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.186013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.186023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.186076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.186081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.186083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.186085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.332 [2024-11-20 11:52:11.186091] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.186094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.332 [2024-11-20 11:52:11.186096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.332 [2024-11-20 11:52:11.186100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.332 [2024-11-20 11:52:11.186110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.332 [2024-11-20 11:52:11.186158] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.332 [2024-11-20 11:52:11.186162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.332 [2024-11-20 11:52:11.186164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.186173] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.333 [2024-11-20 11:52:11.186182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.333 [2024-11-20 11:52:11.186192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.333 [2024-11-20 11:52:11.186246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.333 [2024-11-20 11:52:11.186250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.333 [2024-11-20 11:52:11.186252] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.186261] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186263] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186266] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.333 [2024-11-20 11:52:11.186270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.333 [2024-11-20 11:52:11.186280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.333 [2024-11-20 11:52:11.186327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.333 [2024-11-20 11:52:11.186331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.333 [2024-11-20 11:52:11.186333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186335] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.186342] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186346] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.333 [2024-11-20 11:52:11.186351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.333 [2024-11-20 11:52:11.186360] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.333 [2024-11-20 11:52:11.186410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.333 [2024-11-20 11:52:11.186414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.333 [2024-11-20 11:52:11.186416] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186418] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.186424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.333 [2024-11-20 11:52:11.186433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.333 [2024-11-20 11:52:11.186443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.333 [2024-11-20 11:52:11.186495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.333 [2024-11-20 11:52:11.186499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.333 [2024-11-20 11:52:11.186501] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186503] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.186510] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.333 [2024-11-20 
11:52:11.186514] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.333 [2024-11-20 11:52:11.186520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.333 [2024-11-20 11:52:11.186530] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.333 [2024-11-20 11:52:11.186584] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.333 [2024-11-20 11:52:11.186589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.333 [2024-11-20 11:52:11.186590] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186593] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.186599] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.186604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.333 [2024-11-20 11:52:11.186609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.333 [2024-11-20 11:52:11.186619] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.333 [2024-11-20 11:52:11.190664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.333 [2024-11-20 11:52:11.190675] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.333 [2024-11-20 11:52:11.190678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.190680] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.190687] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.190690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.190692] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1adbd30) 00:24:38.333 [2024-11-20 11:52:11.190697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.333 [2024-11-20 11:52:11.190711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b3a350, cid 3, qid 0 00:24:38.333 [2024-11-20 11:52:11.190758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.333 [2024-11-20 11:52:11.190762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.333 [2024-11-20 11:52:11.190764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.333 [2024-11-20 11:52:11.190767] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b3a350) on tqpair=0x1adbd30 00:24:38.333 [2024-11-20 11:52:11.190772] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:38.333 0 Kelvin (-273 Celsius) 00:24:38.333 Available Spare: 0% 00:24:38.333 Available Spare Threshold: 0% 00:24:38.333 Life Percentage Used: 0% 00:24:38.333 Data Units Read: 0 00:24:38.333 Data Units Written: 0 00:24:38.333 Host Read Commands: 0 
00:24:38.333 Host Write Commands: 0 00:24:38.333 Controller Busy Time: 0 minutes 00:24:38.333 Power Cycles: 0 00:24:38.333 Power On Hours: 0 hours 00:24:38.333 Unsafe Shutdowns: 0 00:24:38.333 Unrecoverable Media Errors: 0 00:24:38.333 Lifetime Error Log Entries: 0 00:24:38.333 Warning Temperature Time: 0 minutes 00:24:38.333 Critical Temperature Time: 0 minutes 00:24:38.333 00:24:38.333 Number of Queues 00:24:38.333 ================ 00:24:38.333 Number of I/O Submission Queues: 127 00:24:38.333 Number of I/O Completion Queues: 127 00:24:38.333 00:24:38.333 Active Namespaces 00:24:38.333 ================= 00:24:38.333 Namespace ID:1 00:24:38.333 Error Recovery Timeout: Unlimited 00:24:38.333 Command Set Identifier: NVM (00h) 00:24:38.333 Deallocate: Supported 00:24:38.333 Deallocated/Unwritten Error: Not Supported 00:24:38.333 Deallocated Read Value: Unknown 00:24:38.333 Deallocate in Write Zeroes: Not Supported 00:24:38.333 Deallocated Guard Field: 0xFFFF 00:24:38.333 Flush: Supported 00:24:38.333 Reservation: Supported 00:24:38.333 Namespace Sharing Capabilities: Multiple Controllers 00:24:38.333 Size (in LBAs): 131072 (0GiB) 00:24:38.333 Capacity (in LBAs): 131072 (0GiB) 00:24:38.333 Utilization (in LBAs): 131072 (0GiB) 00:24:38.333 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:38.333 EUI64: ABCDEF0123456789 00:24:38.333 UUID: 8798e372-67f2-4230-94b4-d94b70f937f6 00:24:38.333 Thin Provisioning: Not Supported 00:24:38.333 Per-NS Atomic Units: Yes 00:24:38.333 Atomic Boundary Size (Normal): 0 00:24:38.333 Atomic Boundary Size (PFail): 0 00:24:38.333 Atomic Boundary Offset: 0 00:24:38.333 Maximum Single Source Range Length: 65535 00:24:38.333 Maximum Copy Length: 65535 00:24:38.333 Maximum Source Range Count: 1 00:24:38.333 NGUID/EUI64 Never Reused: No 00:24:38.333 Namespace Write Protected: No 00:24:38.333 Number of LBA Formats: 1 00:24:38.333 Current LBA Format: LBA Format #00 00:24:38.333 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:38.333 00:24:38.333 11:52:11 -- host/identify.sh@51 -- # sync 00:24:38.333 11:52:11 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.333 11:52:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.333 11:52:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.333 11:52:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.333 11:52:11 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:38.333 11:52:11 -- host/identify.sh@56 -- # nvmftestfini 00:24:38.333 11:52:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:38.333 11:52:11 -- nvmf/common.sh@116 -- # sync 00:24:38.333 11:52:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:38.333 11:52:11 -- nvmf/common.sh@119 -- # set +e 00:24:38.333 11:52:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:38.333 11:52:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:38.333 rmmod nvme_tcp 00:24:38.333 rmmod nvme_fabrics 00:24:38.333 rmmod nvme_keyring 00:24:38.333 11:52:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:38.333 11:52:11 -- nvmf/common.sh@123 -- # set -e 00:24:38.333 11:52:11 -- nvmf/common.sh@124 -- # return 0 00:24:38.333 11:52:11 -- nvmf/common.sh@477 -- # '[' -n 83002 ']' 00:24:38.334 11:52:11 -- nvmf/common.sh@478 -- # killprocess 83002 00:24:38.334 11:52:11 -- common/autotest_common.sh@936 -- # '[' -z 83002 ']' 00:24:38.334 11:52:11 -- common/autotest_common.sh@940 -- # kill -0 83002 00:24:38.334 11:52:11 -- common/autotest_common.sh@941 -- # uname 00:24:38.334 11:52:11 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.334 11:52:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83002 00:24:38.594 11:52:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:38.594 11:52:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:38.594 killing process with pid 83002 00:24:38.594 11:52:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83002' 00:24:38.594 11:52:11 -- common/autotest_common.sh@955 -- # kill 83002 00:24:38.594 [2024-11-20 11:52:11.385074] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:38.594 11:52:11 -- common/autotest_common.sh@960 -- # wait 83002 00:24:38.594 11:52:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:38.594 11:52:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:38.594 11:52:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:38.594 11:52:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.594 11:52:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:38.594 11:52:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.594 11:52:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.594 11:52:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.853 11:52:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:38.853 00:24:38.853 real 0m2.636s 00:24:38.853 user 0m6.886s 00:24:38.853 sys 0m0.729s 00:24:38.853 11:52:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:38.853 11:52:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.853 ************************************ 00:24:38.853 END TEST nvmf_identify 00:24:38.853 ************************************ 00:24:38.853 11:52:11 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:38.853 11:52:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:38.853 11:52:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:38.853 11:52:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.853 ************************************ 00:24:38.853 START TEST nvmf_perf 00:24:38.853 ************************************ 00:24:38.853 11:52:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:38.853 * Looking for test storage... 
00:24:38.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:38.853 11:52:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:38.853 11:52:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:38.853 11:52:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:39.112 11:52:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:39.112 11:52:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:39.112 11:52:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:39.112 11:52:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:39.112 11:52:11 -- scripts/common.sh@335 -- # IFS=.-: 00:24:39.112 11:52:11 -- scripts/common.sh@335 -- # read -ra ver1 00:24:39.112 11:52:11 -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.112 11:52:11 -- scripts/common.sh@336 -- # read -ra ver2 00:24:39.112 11:52:11 -- scripts/common.sh@337 -- # local 'op=<' 00:24:39.112 11:52:11 -- scripts/common.sh@339 -- # ver1_l=2 00:24:39.112 11:52:11 -- scripts/common.sh@340 -- # ver2_l=1 00:24:39.112 11:52:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:39.112 11:52:11 -- scripts/common.sh@343 -- # case "$op" in 00:24:39.112 11:52:11 -- scripts/common.sh@344 -- # : 1 00:24:39.112 11:52:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:39.112 11:52:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.113 11:52:11 -- scripts/common.sh@364 -- # decimal 1 00:24:39.113 11:52:11 -- scripts/common.sh@352 -- # local d=1 00:24:39.113 11:52:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.113 11:52:11 -- scripts/common.sh@354 -- # echo 1 00:24:39.113 11:52:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:39.113 11:52:11 -- scripts/common.sh@365 -- # decimal 2 00:24:39.113 11:52:11 -- scripts/common.sh@352 -- # local d=2 00:24:39.113 11:52:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.113 11:52:11 -- scripts/common.sh@354 -- # echo 2 00:24:39.113 11:52:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:39.113 11:52:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:39.113 11:52:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:39.113 11:52:11 -- scripts/common.sh@367 -- # return 0 00:24:39.113 11:52:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.113 11:52:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.113 --rc genhtml_branch_coverage=1 00:24:39.113 --rc genhtml_function_coverage=1 00:24:39.113 --rc genhtml_legend=1 00:24:39.113 --rc geninfo_all_blocks=1 00:24:39.113 --rc geninfo_unexecuted_blocks=1 00:24:39.113 00:24:39.113 ' 00:24:39.113 11:52:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.113 --rc genhtml_branch_coverage=1 00:24:39.113 --rc genhtml_function_coverage=1 00:24:39.113 --rc genhtml_legend=1 00:24:39.113 --rc geninfo_all_blocks=1 00:24:39.113 --rc geninfo_unexecuted_blocks=1 00:24:39.113 00:24:39.113 ' 00:24:39.113 11:52:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.113 --rc genhtml_branch_coverage=1 00:24:39.113 --rc genhtml_function_coverage=1 00:24:39.113 --rc genhtml_legend=1 00:24:39.113 --rc geninfo_all_blocks=1 00:24:39.113 --rc geninfo_unexecuted_blocks=1 00:24:39.113 00:24:39.113 ' 00:24:39.113 
11:52:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.113 --rc genhtml_branch_coverage=1 00:24:39.113 --rc genhtml_function_coverage=1 00:24:39.113 --rc genhtml_legend=1 00:24:39.113 --rc geninfo_all_blocks=1 00:24:39.113 --rc geninfo_unexecuted_blocks=1 00:24:39.113 00:24:39.113 ' 00:24:39.113 11:52:11 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:39.113 11:52:11 -- nvmf/common.sh@7 -- # uname -s 00:24:39.113 11:52:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.113 11:52:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.113 11:52:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.113 11:52:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.113 11:52:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.113 11:52:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.113 11:52:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.113 11:52:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.113 11:52:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.113 11:52:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.113 11:52:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:24:39.113 11:52:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:24:39.113 11:52:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.113 11:52:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.113 11:52:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:39.113 11:52:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:39.113 11:52:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.113 11:52:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.113 11:52:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.113 11:52:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.113 11:52:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.113 11:52:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.113 11:52:11 -- paths/export.sh@5 -- # export PATH 00:24:39.113 11:52:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.113 11:52:11 -- nvmf/common.sh@46 -- # : 0 00:24:39.113 11:52:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:39.113 11:52:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:39.113 11:52:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:39.113 11:52:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.113 11:52:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.113 11:52:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:39.113 11:52:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:39.113 11:52:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:39.113 11:52:11 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:39.113 11:52:11 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:39.113 11:52:11 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.113 11:52:11 -- host/perf.sh@17 -- # nvmftestinit 00:24:39.113 11:52:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:39.113 11:52:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.113 11:52:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:39.113 11:52:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:39.113 11:52:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:39.113 11:52:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.113 11:52:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.113 11:52:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.113 11:52:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:39.113 11:52:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:39.113 11:52:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:39.113 11:52:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:39.113 11:52:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:39.113 11:52:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:39.113 11:52:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.113 11:52:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.113 11:52:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:39.113 11:52:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:39.113 11:52:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:39.113 11:52:12 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:39.113 11:52:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:39.113 11:52:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.113 11:52:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:39.113 11:52:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:39.113 11:52:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:39.113 11:52:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:39.113 11:52:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:39.113 11:52:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:39.113 Cannot find device "nvmf_tgt_br" 00:24:39.113 11:52:12 -- nvmf/common.sh@154 -- # true 00:24:39.113 11:52:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:39.113 Cannot find device "nvmf_tgt_br2" 00:24:39.113 11:52:12 -- nvmf/common.sh@155 -- # true 00:24:39.113 11:52:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:39.113 11:52:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:39.113 Cannot find device "nvmf_tgt_br" 00:24:39.113 11:52:12 -- nvmf/common.sh@157 -- # true 00:24:39.113 11:52:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:39.113 Cannot find device "nvmf_tgt_br2" 00:24:39.113 11:52:12 -- nvmf/common.sh@158 -- # true 00:24:39.113 11:52:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:39.113 11:52:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:39.374 11:52:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:39.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.374 11:52:12 -- nvmf/common.sh@161 -- # true 00:24:39.374 11:52:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:39.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.374 11:52:12 -- nvmf/common.sh@162 -- # true 00:24:39.374 11:52:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:39.374 11:52:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:39.374 11:52:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:39.374 11:52:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:39.374 11:52:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:39.374 11:52:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:39.374 11:52:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:39.374 11:52:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:39.374 11:52:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:39.374 11:52:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:39.374 11:52:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:39.374 11:52:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:39.374 11:52:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:39.374 11:52:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:39.374 11:52:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:24:39.374 11:52:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:39.374 11:52:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:39.374 11:52:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:39.374 11:52:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:39.374 11:52:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:39.374 11:52:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:39.374 11:52:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:39.374 11:52:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:39.374 11:52:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:39.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:24:39.374 00:24:39.374 --- 10.0.0.2 ping statistics --- 00:24:39.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.374 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:39.374 11:52:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:39.374 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:39.374 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:24:39.374 00:24:39.374 --- 10.0.0.3 ping statistics --- 00:24:39.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.374 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:39.374 11:52:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:39.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:39.374 00:24:39.374 --- 10.0.0.1 ping statistics --- 00:24:39.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.374 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:39.374 11:52:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.374 11:52:12 -- nvmf/common.sh@421 -- # return 0 00:24:39.374 11:52:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:39.374 11:52:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.374 11:52:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:39.374 11:52:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:39.374 11:52:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.374 11:52:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:39.374 11:52:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:39.374 11:52:12 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:39.374 11:52:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:39.374 11:52:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:39.374 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.374 11:52:12 -- nvmf/common.sh@469 -- # nvmfpid=83232 00:24:39.374 11:52:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.374 11:52:12 -- nvmf/common.sh@470 -- # waitforlisten 83232 00:24:39.374 11:52:12 -- common/autotest_common.sh@829 -- # '[' -z 83232 ']' 00:24:39.374 11:52:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.374 11:52:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
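The nvmf_veth_init trace above boils down to the following topology; this is a condensed sketch assuming iproute2 and root privileges, with interface names and addresses taken from the log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way and omitted here):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # one bridge joins the host-side ends of both veth pairs
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target reachability check
  modprobe nvme-tcp                                           # host-side NVMe/TCP driver

With the bridge in place, the initiator at 10.0.0.1 reaches the target namespace at 10.0.0.2:4420 over plain TCP, which is the listener address the nvmf target advertises in the runs that follow.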
00:24:39.374 11:52:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.374 11:52:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.374 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.635 [2024-11-20 11:52:12.418678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:39.635 [2024-11-20 11:52:12.418743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.635 [2024-11-20 11:52:12.550831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.635 [2024-11-20 11:52:12.629632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:39.635 [2024-11-20 11:52:12.629771] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.635 [2024-11-20 11:52:12.629778] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.635 [2024-11-20 11:52:12.629783] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.635 [2024-11-20 11:52:12.630596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.635 [2024-11-20 11:52:12.630745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.635 [2024-11-20 11:52:12.630847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.635 [2024-11-20 11:52:12.630852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.205 11:52:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.205 11:52:13 -- common/autotest_common.sh@862 -- # return 0 00:24:40.205 11:52:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:40.205 11:52:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:40.205 11:52:13 -- common/autotest_common.sh@10 -- # set +x 00:24:40.465 11:52:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.465 11:52:13 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:40.465 11:52:13 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:24:40.724 11:52:13 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:24:40.724 11:52:13 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:40.984 11:52:13 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:24:40.984 11:52:13 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:41.244 11:52:14 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:41.244 11:52:14 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:24:41.244 11:52:14 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:41.244 11:52:14 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:41.244 11:52:14 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:41.244 [2024-11-20 11:52:14.221436] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.244 11:52:14 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:41.504 11:52:14 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:24:41.504 11:52:14 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:41.764 11:52:14 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:41.764 11:52:14 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:41.764 11:52:14 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.023 [2024-11-20 11:52:14.945028] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.023 11:52:14 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:42.284 11:52:15 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:24:42.284 11:52:15 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:24:42.284 11:52:15 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:42.284 11:52:15 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:24:43.223 Initializing NVMe Controllers 00:24:43.223 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:24:43.223 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:24:43.223 Initialization complete. Launching workers. 00:24:43.223 ======================================================== 00:24:43.223 Latency(us) 00:24:43.223 Device Information : IOPS MiB/s Average min max 00:24:43.223 PCIE (0000:00:06.0) NSID 1 from core 0: 19942.00 77.90 1604.44 244.29 7673.60 00:24:43.223 ======================================================== 00:24:43.223 Total : 19942.00 77.90 1604.44 244.29 7673.60 00:24:43.223 00:24:43.223 11:52:16 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.600 Initializing NVMe Controllers 00:24:44.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:44.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:44.600 Initialization complete. Launching workers. 
00:24:44.600 ======================================================== 00:24:44.600 Latency(us) 00:24:44.600 Device Information : IOPS MiB/s Average min max 00:24:44.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5561.73 21.73 178.88 68.96 5134.49 00:24:44.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.88 0.48 8136.22 5037.02 12039.50 00:24:44.600 ======================================================== 00:24:44.601 Total : 5685.61 22.21 352.26 68.96 12039.50 00:24:44.601 00:24:44.601 11:52:17 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.980 Initializing NVMe Controllers 00:24:45.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:45.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:45.980 Initialization complete. Launching workers. 00:24:45.980 ======================================================== 00:24:45.980 Latency(us) 00:24:45.980 Device Information : IOPS MiB/s Average min max 00:24:45.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12498.41 48.82 2560.86 487.12 6179.36 00:24:45.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2698.87 10.54 11962.43 7448.03 20171.94 00:24:45.980 ======================================================== 00:24:45.980 Total : 15197.29 59.36 4230.47 487.12 20171.94 00:24:45.980 00:24:45.980 11:52:18 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:24:45.980 11:52:18 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.551 Initializing NVMe Controllers 00:24:48.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.551 Controller IO queue size 128, less than required. 00:24:48.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.551 Controller IO queue size 128, less than required. 00:24:48.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:48.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:48.551 Initialization complete. Launching workers. 
00:24:48.551 ======================================================== 00:24:48.551 Latency(us) 00:24:48.551 Device Information : IOPS MiB/s Average min max 00:24:48.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2220.80 555.20 58417.78 37815.77 97511.00 00:24:48.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 607.04 151.76 218506.65 83149.62 334715.02 00:24:48.551 ======================================================== 00:24:48.551 Total : 2827.84 706.96 92783.15 37815.77 334715.02 00:24:48.551 00:24:48.551 11:52:21 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:48.551 No valid NVMe controllers or AIO or URING devices found 00:24:48.810 Initializing NVMe Controllers 00:24:48.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.810 Controller IO queue size 128, less than required. 00:24:48.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.810 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:48.810 Controller IO queue size 128, less than required. 00:24:48.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.810 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:24:48.810 WARNING: Some requested NVMe devices were skipped 00:24:48.810 11:52:21 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:51.347 Initializing NVMe Controllers 00:24:51.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.347 Controller IO queue size 128, less than required. 00:24:51.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.347 Controller IO queue size 128, less than required. 00:24:51.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:51.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:51.347 Initialization complete. Launching workers. 
00:24:51.347 00:24:51.347 ==================== 00:24:51.347 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:51.347 TCP transport: 00:24:51.347 polls: 15333 00:24:51.347 idle_polls: 11766 00:24:51.347 sock_completions: 3567 00:24:51.347 nvme_completions: 7103 00:24:51.347 submitted_requests: 10817 00:24:51.347 queued_requests: 1 00:24:51.347 00:24:51.347 ==================== 00:24:51.347 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:51.347 TCP transport: 00:24:51.347 polls: 15335 00:24:51.347 idle_polls: 11920 00:24:51.347 sock_completions: 3415 00:24:51.347 nvme_completions: 6731 00:24:51.347 submitted_requests: 10237 00:24:51.347 queued_requests: 1 00:24:51.347 ======================================================== 00:24:51.347 Latency(us) 00:24:51.347 Device Information : IOPS MiB/s Average min max 00:24:51.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1838.73 459.68 70789.27 45305.62 113136.15 00:24:51.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1745.27 436.32 74068.50 38592.03 124324.08 00:24:51.347 ======================================================== 00:24:51.347 Total : 3583.99 896.00 72386.13 38592.03 124324.08 00:24:51.347 00:24:51.347 11:52:24 -- host/perf.sh@66 -- # sync 00:24:51.347 11:52:24 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.347 11:52:24 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:24:51.347 11:52:24 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:24:51.347 11:52:24 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:24:51.607 11:52:24 -- host/perf.sh@72 -- # ls_guid=9f5e0fed-ad99-42b3-a89b-cea26cb91646 00:24:51.607 11:52:24 -- host/perf.sh@73 -- # get_lvs_free_mb 9f5e0fed-ad99-42b3-a89b-cea26cb91646 00:24:51.607 11:52:24 -- common/autotest_common.sh@1353 -- # local lvs_uuid=9f5e0fed-ad99-42b3-a89b-cea26cb91646 00:24:51.607 11:52:24 -- common/autotest_common.sh@1354 -- # local lvs_info 00:24:51.607 11:52:24 -- common/autotest_common.sh@1355 -- # local fc 00:24:51.607 11:52:24 -- common/autotest_common.sh@1356 -- # local cs 00:24:51.607 11:52:24 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:51.866 11:52:24 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:24:51.866 { 00:24:51.866 "base_bdev": "Nvme0n1", 00:24:51.866 "block_size": 4096, 00:24:51.866 "cluster_size": 4194304, 00:24:51.866 "free_clusters": 1278, 00:24:51.866 "name": "lvs_0", 00:24:51.866 "total_data_clusters": 1278, 00:24:51.866 "uuid": "9f5e0fed-ad99-42b3-a89b-cea26cb91646" 00:24:51.866 } 00:24:51.866 ]' 00:24:51.866 11:52:24 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="9f5e0fed-ad99-42b3-a89b-cea26cb91646") .free_clusters' 00:24:51.866 11:52:24 -- common/autotest_common.sh@1358 -- # fc=1278 00:24:51.866 11:52:24 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="9f5e0fed-ad99-42b3-a89b-cea26cb91646") .cluster_size' 00:24:51.866 11:52:24 -- common/autotest_common.sh@1359 -- # cs=4194304 00:24:51.866 11:52:24 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:24:51.866 5112 00:24:51.866 11:52:24 -- common/autotest_common.sh@1363 -- # echo 5112 00:24:51.866 11:52:24 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:24:51.866 11:52:24 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 9f5e0fed-ad99-42b3-a89b-cea26cb91646 lbd_0 5112 00:24:52.126 11:52:25 -- host/perf.sh@80 -- # lb_guid=a35bacb4-d9d8-4c31-8a29-2c21af7b09de 00:24:52.126 11:52:25 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore a35bacb4-d9d8-4c31-8a29-2c21af7b09de lvs_n_0 00:24:52.385 11:52:25 -- host/perf.sh@83 -- # ls_nested_guid=7324986b-dee7-4642-9f62-b22b0aa3b4a4 00:24:52.385 11:52:25 -- host/perf.sh@84 -- # get_lvs_free_mb 7324986b-dee7-4642-9f62-b22b0aa3b4a4 00:24:52.385 11:52:25 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7324986b-dee7-4642-9f62-b22b0aa3b4a4 00:24:52.385 11:52:25 -- common/autotest_common.sh@1354 -- # local lvs_info 00:24:52.385 11:52:25 -- common/autotest_common.sh@1355 -- # local fc 00:24:52.385 11:52:25 -- common/autotest_common.sh@1356 -- # local cs 00:24:52.385 11:52:25 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:52.644 11:52:25 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:24:52.644 { 00:24:52.644 "base_bdev": "Nvme0n1", 00:24:52.644 "block_size": 4096, 00:24:52.644 "cluster_size": 4194304, 00:24:52.644 "free_clusters": 0, 00:24:52.644 "name": "lvs_0", 00:24:52.644 "total_data_clusters": 1278, 00:24:52.644 "uuid": "9f5e0fed-ad99-42b3-a89b-cea26cb91646" 00:24:52.644 }, 00:24:52.644 { 00:24:52.644 "base_bdev": "a35bacb4-d9d8-4c31-8a29-2c21af7b09de", 00:24:52.644 "block_size": 4096, 00:24:52.644 "cluster_size": 4194304, 00:24:52.644 "free_clusters": 1276, 00:24:52.644 "name": "lvs_n_0", 00:24:52.644 "total_data_clusters": 1276, 00:24:52.644 "uuid": "7324986b-dee7-4642-9f62-b22b0aa3b4a4" 00:24:52.644 } 00:24:52.644 ]' 00:24:52.644 11:52:25 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7324986b-dee7-4642-9f62-b22b0aa3b4a4") .free_clusters' 00:24:52.644 11:52:25 -- common/autotest_common.sh@1358 -- # fc=1276 00:24:52.644 11:52:25 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7324986b-dee7-4642-9f62-b22b0aa3b4a4") .cluster_size' 00:24:52.644 11:52:25 -- common/autotest_common.sh@1359 -- # cs=4194304 00:24:52.644 11:52:25 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:24:52.644 5104 00:24:52.644 11:52:25 -- common/autotest_common.sh@1363 -- # echo 5104 00:24:52.644 11:52:25 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:24:52.644 11:52:25 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7324986b-dee7-4642-9f62-b22b0aa3b4a4 lbd_nest_0 5104 00:24:52.904 11:52:25 -- host/perf.sh@88 -- # lb_nested_guid=ebad499a-679e-4b58-97ca-895d0c69dbf6 00:24:52.904 11:52:25 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.164 11:52:25 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:24:53.164 11:52:25 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ebad499a-679e-4b58-97ca-895d0c69dbf6 00:24:53.164 11:52:26 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.424 11:52:26 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:24:53.424 11:52:26 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:24:53.424 11:52:26 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:53.424 11:52:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:53.424 11:52:26 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:53.684 No valid NVMe controllers or AIO or URING devices found 00:24:53.684 Initializing NVMe Controllers 00:24:53.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.684 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:53.684 WARNING: Some requested NVMe devices were skipped 00:24:53.684 11:52:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:53.684 11:52:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.907 Initializing NVMe Controllers 00:25:05.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:05.907 Initialization complete. Launching workers. 00:25:05.907 ======================================================== 00:25:05.907 Latency(us) 00:25:05.907 Device Information : IOPS MiB/s Average min max 00:25:05.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1027.37 128.42 973.13 270.65 7778.78 00:25:05.907 ======================================================== 00:25:05.907 Total : 1027.37 128.42 973.13 270.65 7778.78 00:25:05.907 00:25:05.907 11:52:36 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:05.907 11:52:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:05.907 11:52:36 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.907 No valid NVMe controllers or AIO or URING devices found 00:25:05.907 Initializing NVMe Controllers 00:25:05.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.907 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:25:05.907 WARNING: Some requested NVMe devices were skipped 00:25:05.907 11:52:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:05.907 11:52:37 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.897 [2024-11-20 11:52:47.372681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b900 is same with the state(5) to be set 00:25:15.897 [2024-11-20 11:52:47.372735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b900 is same with the state(5) to be set 00:25:15.897 [2024-11-20 11:52:47.372742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b900 is same with the state(5) to be set 00:25:15.897 Initializing NVMe Controllers 00:25:15.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:15.897 Initialization complete. Launching workers. 
00:25:15.897 ======================================================== 00:25:15.897 Latency(us) 00:25:15.897 Device Information : IOPS MiB/s Average min max 00:25:15.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1100.60 137.57 29102.74 8115.86 245220.92 00:25:15.897 ======================================================== 00:25:15.897 Total : 1100.60 137.57 29102.74 8115.86 245220.92 00:25:15.897 00:25:15.897 11:52:47 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:15.897 11:52:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:15.897 11:52:47 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.897 No valid NVMe controllers or AIO or URING devices found 00:25:15.897 Initializing NVMe Controllers 00:25:15.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.897 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:25:15.897 WARNING: Some requested NVMe devices were skipped 00:25:15.897 11:52:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:15.897 11:52:47 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.882 Initializing NVMe Controllers 00:25:25.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.882 Controller IO queue size 128, less than required. 00:25:25.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:25.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.882 Initialization complete. Launching workers. 
00:25:25.882 ======================================================== 00:25:25.882 Latency(us) 00:25:25.882 Device Information : IOPS MiB/s Average min max 00:25:25.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5272.25 659.03 24301.27 7799.40 58968.61 00:25:25.882 ======================================================== 00:25:25.882 Total : 5272.25 659.03 24301.27 7799.40 58968.61 00:25:25.882 00:25:25.882 11:52:58 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.882 11:52:58 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ebad499a-679e-4b58-97ca-895d0c69dbf6 00:25:25.882 11:52:58 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:25.883 11:52:58 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a35bacb4-d9d8-4c31-8a29-2c21af7b09de 00:25:25.883 11:52:58 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:26.142 11:52:59 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:26.142 11:52:59 -- host/perf.sh@114 -- # nvmftestfini 00:25:26.142 11:52:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:26.142 11:52:59 -- nvmf/common.sh@116 -- # sync 00:25:26.142 11:52:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:26.142 11:52:59 -- nvmf/common.sh@119 -- # set +e 00:25:26.142 11:52:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:26.142 11:52:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:26.142 rmmod nvme_tcp 00:25:26.142 rmmod nvme_fabrics 00:25:26.142 rmmod nvme_keyring 00:25:26.142 11:52:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:26.142 11:52:59 -- nvmf/common.sh@123 -- # set -e 00:25:26.143 11:52:59 -- nvmf/common.sh@124 -- # return 0 00:25:26.143 11:52:59 -- nvmf/common.sh@477 -- # '[' -n 83232 ']' 00:25:26.143 11:52:59 -- nvmf/common.sh@478 -- # killprocess 83232 00:25:26.143 11:52:59 -- common/autotest_common.sh@936 -- # '[' -z 83232 ']' 00:25:26.143 11:52:59 -- common/autotest_common.sh@940 -- # kill -0 83232 00:25:26.143 11:52:59 -- common/autotest_common.sh@941 -- # uname 00:25:26.402 11:52:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:26.402 11:52:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83232 00:25:26.402 killing process with pid 83232 00:25:26.402 11:52:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:26.402 11:52:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:26.402 11:52:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83232' 00:25:26.402 11:52:59 -- common/autotest_common.sh@955 -- # kill 83232 00:25:26.402 11:52:59 -- common/autotest_common.sh@960 -- # wait 83232 00:25:29.710 11:53:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:29.710 11:53:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:29.710 11:53:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:29.710 11:53:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.710 11:53:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:29.710 11:53:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.710 11:53:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.710 11:53:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.710 11:53:02 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:25:29.710 00:25:29.710 real 0m50.372s 00:25:29.710 user 3m9.969s 00:25:29.710 sys 0m9.937s 00:25:29.710 11:53:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:29.710 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.710 ************************************ 00:25:29.710 END TEST nvmf_perf 00:25:29.710 ************************************ 00:25:29.710 11:53:02 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:29.710 11:53:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:29.710 11:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:29.710 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.710 ************************************ 00:25:29.710 START TEST nvmf_fio_host 00:25:29.710 ************************************ 00:25:29.710 11:53:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:29.710 * Looking for test storage... 00:25:29.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:29.710 11:53:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:29.710 11:53:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:29.710 11:53:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:29.710 11:53:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:29.710 11:53:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:29.710 11:53:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:29.710 11:53:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:29.710 11:53:02 -- scripts/common.sh@335 -- # IFS=.-: 00:25:29.710 11:53:02 -- scripts/common.sh@335 -- # read -ra ver1 00:25:29.710 11:53:02 -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.710 11:53:02 -- scripts/common.sh@336 -- # read -ra ver2 00:25:29.710 11:53:02 -- scripts/common.sh@337 -- # local 'op=<' 00:25:29.710 11:53:02 -- scripts/common.sh@339 -- # ver1_l=2 00:25:29.710 11:53:02 -- scripts/common.sh@340 -- # ver2_l=1 00:25:29.710 11:53:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:29.710 11:53:02 -- scripts/common.sh@343 -- # case "$op" in 00:25:29.710 11:53:02 -- scripts/common.sh@344 -- # : 1 00:25:29.710 11:53:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:29.710 11:53:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.710 11:53:02 -- scripts/common.sh@364 -- # decimal 1 00:25:29.710 11:53:02 -- scripts/common.sh@352 -- # local d=1 00:25:29.710 11:53:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.710 11:53:02 -- scripts/common.sh@354 -- # echo 1 00:25:29.710 11:53:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:29.710 11:53:02 -- scripts/common.sh@365 -- # decimal 2 00:25:29.710 11:53:02 -- scripts/common.sh@352 -- # local d=2 00:25:29.710 11:53:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.710 11:53:02 -- scripts/common.sh@354 -- # echo 2 00:25:29.710 11:53:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:29.710 11:53:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:29.710 11:53:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:29.710 11:53:02 -- scripts/common.sh@367 -- # return 0 00:25:29.710 11:53:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.710 11:53:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:29.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.710 --rc genhtml_branch_coverage=1 00:25:29.710 --rc genhtml_function_coverage=1 00:25:29.710 --rc genhtml_legend=1 00:25:29.710 --rc geninfo_all_blocks=1 00:25:29.710 --rc geninfo_unexecuted_blocks=1 00:25:29.710 00:25:29.710 ' 00:25:29.710 11:53:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:29.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.710 --rc genhtml_branch_coverage=1 00:25:29.710 --rc genhtml_function_coverage=1 00:25:29.710 --rc genhtml_legend=1 00:25:29.710 --rc geninfo_all_blocks=1 00:25:29.710 --rc geninfo_unexecuted_blocks=1 00:25:29.710 00:25:29.710 ' 00:25:29.710 11:53:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:29.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.710 --rc genhtml_branch_coverage=1 00:25:29.710 --rc genhtml_function_coverage=1 00:25:29.710 --rc genhtml_legend=1 00:25:29.710 --rc geninfo_all_blocks=1 00:25:29.710 --rc geninfo_unexecuted_blocks=1 00:25:29.710 00:25:29.710 ' 00:25:29.710 11:53:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:29.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.710 --rc genhtml_branch_coverage=1 00:25:29.710 --rc genhtml_function_coverage=1 00:25:29.710 --rc genhtml_legend=1 00:25:29.710 --rc geninfo_all_blocks=1 00:25:29.710 --rc geninfo_unexecuted_blocks=1 00:25:29.710 00:25:29.710 ' 00:25:29.710 11:53:02 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.710 11:53:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.710 11:53:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.710 11:53:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.710 11:53:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.710 11:53:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.710 11:53:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.710 11:53:02 -- paths/export.sh@5 -- # export PATH 00:25:29.710 11:53:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.710 11:53:02 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:29.711 11:53:02 -- nvmf/common.sh@7 -- # uname -s 00:25:29.711 11:53:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.711 11:53:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.711 11:53:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.711 11:53:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.711 11:53:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.711 11:53:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.711 11:53:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.711 11:53:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.711 11:53:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.711 11:53:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.711 11:53:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:25:29.711 11:53:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:25:29.711 11:53:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.711 11:53:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.711 11:53:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:29.711 11:53:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.711 11:53:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.711 11:53:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.711 11:53:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.711 11:53:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.711 11:53:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.711 11:53:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.711 11:53:02 -- paths/export.sh@5 -- # export PATH 00:25:29.711 11:53:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.711 11:53:02 -- nvmf/common.sh@46 -- # : 0 00:25:29.711 11:53:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:29.711 11:53:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:29.711 11:53:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:29.711 11:53:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.711 11:53:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.711 11:53:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:29.711 11:53:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:29.711 11:53:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:29.711 11:53:02 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.711 11:53:02 -- host/fio.sh@14 -- # nvmftestinit 00:25:29.711 11:53:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:29.711 11:53:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.711 11:53:02 -- nvmf/common.sh@436 -- # prepare_net_devs 
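At this point prepare_net_devs hands off to nvmf_veth_init (NET_TYPE=virt), and the trace that follows tears down any leftover interfaces and rebuilds the virtual test topology: an initiator veth on the host, two target veths moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side ends together. A condensed sketch of the equivalent commands, reconstructed from the trace below (interface names and 10.0.0.x addresses exactly as logged; a simplified outline, not the verbatim nvmf/common.sh helper):

    ip netns add nvmf_tgt_ns_spdk                               # target runs inside its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target-side veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target-side veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge the host-side peer interfaces
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # connectivity sanity checks, as in the trace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1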
00:25:29.711 11:53:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:29.711 11:53:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:29.711 11:53:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.711 11:53:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.711 11:53:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.711 11:53:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:29.711 11:53:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:29.711 11:53:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:29.711 11:53:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:29.711 11:53:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:29.711 11:53:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:29.711 11:53:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.711 11:53:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.711 11:53:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:29.711 11:53:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:29.711 11:53:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:29.711 11:53:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:29.711 11:53:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:29.711 11:53:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.711 11:53:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:29.711 11:53:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:29.711 11:53:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:29.711 11:53:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:29.711 11:53:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:29.711 11:53:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:29.711 Cannot find device "nvmf_tgt_br" 00:25:29.711 11:53:02 -- nvmf/common.sh@154 -- # true 00:25:29.711 11:53:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:29.711 Cannot find device "nvmf_tgt_br2" 00:25:29.711 11:53:02 -- nvmf/common.sh@155 -- # true 00:25:29.711 11:53:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:29.711 11:53:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:29.711 Cannot find device "nvmf_tgt_br" 00:25:29.711 11:53:02 -- nvmf/common.sh@157 -- # true 00:25:29.711 11:53:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:29.711 Cannot find device "nvmf_tgt_br2" 00:25:29.711 11:53:02 -- nvmf/common.sh@158 -- # true 00:25:29.711 11:53:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:29.711 11:53:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:29.711 11:53:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:29.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.711 11:53:02 -- nvmf/common.sh@161 -- # true 00:25:29.711 11:53:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:29.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.711 11:53:02 -- nvmf/common.sh@162 -- # true 00:25:29.711 11:53:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:29.711 11:53:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:29.711 11:53:02 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:29.711 11:53:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:29.711 11:53:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:29.711 11:53:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:29.711 11:53:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:29.711 11:53:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:29.711 11:53:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:29.711 11:53:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:29.711 11:53:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:29.711 11:53:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:29.711 11:53:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:29.711 11:53:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:29.711 11:53:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:29.711 11:53:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:29.711 11:53:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:29.711 11:53:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:29.711 11:53:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:29.711 11:53:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:29.711 11:53:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:29.711 11:53:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:29.711 11:53:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.711 11:53:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:29.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:25:29.711 00:25:29.711 --- 10.0.0.2 ping statistics --- 00:25:29.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.711 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:29.711 11:53:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:29.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:25:29.711 00:25:29.711 --- 10.0.0.3 ping statistics --- 00:25:29.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.711 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:29.711 11:53:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:25:29.711 00:25:29.711 --- 10.0.0.1 ping statistics --- 00:25:29.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.711 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:25:29.711 11:53:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.711 11:53:02 -- nvmf/common.sh@421 -- # return 0 00:25:29.711 11:53:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:29.711 11:53:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.711 11:53:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:29.711 11:53:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:29.712 11:53:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.712 11:53:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:29.712 11:53:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:29.712 11:53:02 -- host/fio.sh@16 -- # [[ y != y ]] 00:25:29.712 11:53:02 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:29.712 11:53:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.971 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.971 11:53:02 -- host/fio.sh@24 -- # nvmfpid=84209 00:25:29.971 11:53:02 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:29.971 11:53:02 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:29.971 11:53:02 -- host/fio.sh@28 -- # waitforlisten 84209 00:25:29.971 11:53:02 -- common/autotest_common.sh@829 -- # '[' -z 84209 ']' 00:25:29.971 11:53:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.971 11:53:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.971 11:53:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.971 11:53:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.971 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.971 [2024-11-20 11:53:02.806156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:29.971 [2024-11-20 11:53:02.806227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.971 [2024-11-20 11:53:02.944719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.231 [2024-11-20 11:53:03.031542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:30.231 [2024-11-20 11:53:03.031668] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.231 [2024-11-20 11:53:03.031676] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.231 [2024-11-20 11:53:03.031681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
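With the topology up, host/fio.sh launches nvmf_tgt inside the namespace and, once the reactors report in below, provisions it over JSON-RPC and drives it with SPDK's fio plugin. A condensed sketch of the sequence traced below (NQNs, addresses, and flags exactly as logged; a simplified outline, not the verbatim script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target app, pid 84209 in this run
    $rpc nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport, options as traced
    $rpc bdev_malloc_create 64 512 -b Malloc1                                      # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # fio reaches the subsystem through the SPDK NVMe fio plugin rather than a kernel block device:
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The later runs in this test repeat the same pattern against cnode2 and cnode3, whose namespaces are logical volumes carved out of the attached NVMe drive (bdev_nvme_attach_controller, bdev_lvol_create_lvstore, bdev_lvol_create), as the subsequent trace shows.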
00:25:30.231 [2024-11-20 11:53:03.031863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.231 [2024-11-20 11:53:03.032057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.231 [2024-11-20 11:53:03.032050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.231 [2024-11-20 11:53:03.031971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.800 11:53:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.800 11:53:03 -- common/autotest_common.sh@862 -- # return 0 00:25:30.800 11:53:03 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:30.800 [2024-11-20 11:53:03.818367] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.060 11:53:03 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:31.060 11:53:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.060 11:53:03 -- common/autotest_common.sh@10 -- # set +x 00:25:31.060 11:53:03 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:31.060 Malloc1 00:25:31.321 11:53:04 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:31.321 11:53:04 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:31.580 11:53:04 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.840 [2024-11-20 11:53:04.645581] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.840 11:53:04 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:31.840 11:53:04 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:25:31.840 11:53:04 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:31.840 11:53:04 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:31.840 11:53:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:31.840 11:53:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:31.840 11:53:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:31.840 11:53:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:31.840 11:53:04 -- common/autotest_common.sh@1330 -- # shift 00:25:31.840 11:53:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:31.840 11:53:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.840 11:53:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:31.840 11:53:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:31.840 11:53:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:32.100 11:53:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:32.100 11:53:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:32.100 11:53:04 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.100 11:53:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:32.100 11:53:04 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:32.100 11:53:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:32.100 11:53:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:32.100 11:53:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:32.100 11:53:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:32.100 11:53:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:32.100 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:32.100 fio-3.35 00:25:32.100 Starting 1 thread 00:25:34.640 00:25:34.640 test: (groupid=0, jobs=1): err= 0: pid=84336: Wed Nov 20 11:53:07 2024 00:25:34.640 read: IOPS=13.2k, BW=51.7MiB/s (54.2MB/s)(104MiB/2005msec) 00:25:34.640 slat (nsec): min=1497, max=335658, avg=1665.63, stdev=2599.99 00:25:34.640 clat (usec): min=3130, max=8760, avg=5117.31, stdev=398.31 00:25:34.640 lat (usec): min=3163, max=8762, avg=5118.98, stdev=398.27 00:25:34.640 clat percentiles (usec): 00:25:34.640 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:25:34.640 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:25:34.640 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5604], 95.00th=[ 5800], 00:25:34.640 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7439], 99.95th=[ 7701], 00:25:34.640 | 99.99th=[ 8586] 00:25:34.640 bw ( KiB/s): min=52248, max=53400, per=100.00%, avg=52918.00, stdev=490.65, samples=4 00:25:34.640 iops : min=13062, max=13350, avg=13229.50, stdev=122.66, samples=4 00:25:34.640 write: IOPS=13.2k, BW=51.6MiB/s (54.2MB/s)(104MiB/2005msec); 0 zone resets 00:25:34.640 slat (nsec): min=1553, max=373473, avg=1717.76, stdev=2379.71 00:25:34.640 clat (usec): min=2530, max=8626, avg=4518.72, stdev=329.62 00:25:34.640 lat (usec): min=2543, max=8628, avg=4520.44, stdev=329.62 00:25:34.640 clat percentiles (usec): 00:25:34.640 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4293], 00:25:34.640 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4621], 00:25:34.640 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5014], 00:25:34.640 | 99.00th=[ 5211], 99.50th=[ 5407], 99.90th=[ 6849], 99.95th=[ 7504], 00:25:34.640 | 99.99th=[ 8356] 00:25:34.640 bw ( KiB/s): min=52552, max=53288, per=100.00%, avg=52884.00, stdev=313.30, samples=4 00:25:34.640 iops : min=13138, max=13322, avg=13221.00, stdev=78.32, samples=4 00:25:34.640 lat (msec) : 4=2.39%, 10=97.61% 00:25:34.640 cpu : usr=73.40%, sys=19.86%, ctx=9, majf=0, minf=5 00:25:34.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:34.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:34.640 issued rwts: total=26520,26508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:34.640 00:25:34.640 Run status group 0 (all jobs): 00:25:34.640 READ: bw=51.7MiB/s (54.2MB/s), 51.7MiB/s-51.7MiB/s (54.2MB/s-54.2MB/s), io=104MiB (109MB), run=2005-2005msec 
00:25:34.640 WRITE: bw=51.6MiB/s (54.2MB/s), 51.6MiB/s-51.6MiB/s (54.2MB/s-54.2MB/s), io=104MiB (109MB), run=2005-2005msec 00:25:34.640 11:53:07 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:34.640 11:53:07 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:34.640 11:53:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:34.640 11:53:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:34.640 11:53:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:34.640 11:53:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:34.640 11:53:07 -- common/autotest_common.sh@1330 -- # shift 00:25:34.640 11:53:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:34.640 11:53:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:34.640 11:53:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:34.640 11:53:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:34.640 11:53:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:34.640 11:53:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:34.640 11:53:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:34.640 11:53:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:34.640 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:34.640 fio-3.35 00:25:34.640 Starting 1 thread 00:25:37.188 00:25:37.188 test: (groupid=0, jobs=1): err= 0: pid=84380: Wed Nov 20 11:53:09 2024 00:25:37.188 read: IOPS=11.9k, BW=185MiB/s (194MB/s)(372MiB/2005msec) 00:25:37.188 slat (nsec): min=2386, max=82762, avg=2704.75, stdev=1692.65 00:25:37.188 clat (usec): min=1811, max=17172, avg=6423.13, stdev=1618.51 00:25:37.188 lat (usec): min=1813, max=17195, avg=6425.83, stdev=1618.88 00:25:37.188 clat percentiles (usec): 00:25:37.188 | 1.00th=[ 3359], 5.00th=[ 4015], 10.00th=[ 4424], 20.00th=[ 5014], 00:25:37.188 | 30.00th=[ 5473], 40.00th=[ 5932], 50.00th=[ 6325], 60.00th=[ 6783], 00:25:37.188 | 70.00th=[ 7308], 80.00th=[ 7767], 90.00th=[ 8160], 95.00th=[ 9110], 00:25:37.188 | 99.00th=[10945], 99.50th=[11600], 99.90th=[15533], 99.95th=[16581], 00:25:37.188 | 99.99th=[17171] 00:25:37.188 bw ( KiB/s): min=92448, max=94880, per=49.27%, avg=93560.00, stdev=1024.46, samples=4 00:25:37.188 iops : min= 5778, 
max= 5930, avg=5847.50, stdev=64.03, samples=4 00:25:37.188 write: IOPS=6802, BW=106MiB/s (111MB/s)(190MiB/1790msec); 0 zone resets 00:25:37.188 slat (usec): min=27, max=524, avg=29.36, stdev= 9.86 00:25:37.188 clat (usec): min=2225, max=18418, avg=7830.58, stdev=1514.88 00:25:37.188 lat (usec): min=2253, max=18561, avg=7859.94, stdev=1518.57 00:25:37.188 clat percentiles (usec): 00:25:37.188 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6194], 20.00th=[ 6587], 00:25:37.188 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7701], 60.00th=[ 8029], 00:25:37.188 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[10421], 00:25:37.188 | 99.00th=[12256], 99.50th=[13173], 99.90th=[17957], 99.95th=[17957], 00:25:37.188 | 99.99th=[18220] 00:25:37.188 bw ( KiB/s): min=95968, max=98656, per=89.50%, avg=97416.00, stdev=1121.18, samples=4 00:25:37.189 iops : min= 5998, max= 6166, avg=6088.50, stdev=70.07, samples=4 00:25:37.189 lat (msec) : 2=0.02%, 4=3.28%, 10=92.33%, 20=4.37% 00:25:37.189 cpu : usr=75.11%, sys=16.31%, ctx=24, majf=0, minf=1 00:25:37.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:37.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.189 issued rwts: total=23796,12177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.189 00:25:37.189 Run status group 0 (all jobs): 00:25:37.189 READ: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=372MiB (390MB), run=2005-2005msec 00:25:37.189 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=190MiB (200MB), run=1790-1790msec 00:25:37.189 11:53:09 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.189 11:53:10 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:25:37.189 11:53:10 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:25:37.189 11:53:10 -- host/fio.sh@51 -- # get_nvme_bdfs 00:25:37.189 11:53:10 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:37.189 11:53:10 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:37.189 11:53:10 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:37.189 11:53:10 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:37.189 11:53:10 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:37.189 11:53:10 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:37.189 11:53:10 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:37.189 11:53:10 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:25:37.463 Nvme0n1 00:25:37.463 11:53:10 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:25:37.723 11:53:10 -- host/fio.sh@53 -- # ls_guid=45225280-28d1-4d10-88a9-0bcaeb9e940a 00:25:37.723 11:53:10 -- host/fio.sh@54 -- # get_lvs_free_mb 45225280-28d1-4d10-88a9-0bcaeb9e940a 00:25:37.723 11:53:10 -- common/autotest_common.sh@1353 -- # local lvs_uuid=45225280-28d1-4d10-88a9-0bcaeb9e940a 00:25:37.723 11:53:10 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:37.723 11:53:10 -- common/autotest_common.sh@1355 -- # local fc 00:25:37.723 11:53:10 -- 
common/autotest_common.sh@1356 -- # local cs 00:25:37.723 11:53:10 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:37.723 11:53:10 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:37.723 { 00:25:37.723 "base_bdev": "Nvme0n1", 00:25:37.723 "block_size": 4096, 00:25:37.723 "cluster_size": 1073741824, 00:25:37.723 "free_clusters": 4, 00:25:37.723 "name": "lvs_0", 00:25:37.723 "total_data_clusters": 4, 00:25:37.723 "uuid": "45225280-28d1-4d10-88a9-0bcaeb9e940a" 00:25:37.723 } 00:25:37.723 ]' 00:25:37.723 11:53:10 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="45225280-28d1-4d10-88a9-0bcaeb9e940a") .free_clusters' 00:25:37.982 11:53:10 -- common/autotest_common.sh@1358 -- # fc=4 00:25:37.983 11:53:10 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="45225280-28d1-4d10-88a9-0bcaeb9e940a") .cluster_size' 00:25:37.983 4096 00:25:37.983 11:53:10 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:25:37.983 11:53:10 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:25:37.983 11:53:10 -- common/autotest_common.sh@1363 -- # echo 4096 00:25:37.983 11:53:10 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:25:38.243 ad528794-6158-41a5-b914-f4afe1ab8589 00:25:38.243 11:53:11 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:25:38.243 11:53:11 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:25:38.503 11:53:11 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:38.763 11:53:11 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:38.763 11:53:11 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:38.763 11:53:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:38.763 11:53:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.763 11:53:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:38.763 11:53:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:38.763 11:53:11 -- common/autotest_common.sh@1330 -- # shift 00:25:38.763 11:53:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:38.763 11:53:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.763 11:53:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:38.763 11:53:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:38.763 11:53:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:38.763 11:53:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:38.763 11:53:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:38.763 11:53:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.763 11:53:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:38.763 11:53:11 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:38.763 11:53:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:38.763 11:53:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:38.763 11:53:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:38.763 11:53:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:38.763 11:53:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:38.763 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:38.763 fio-3.35 00:25:38.763 Starting 1 thread 00:25:41.314 00:25:41.314 test: (groupid=0, jobs=1): err= 0: pid=84530: Wed Nov 20 11:53:14 2024 00:25:41.314 read: IOPS=7389, BW=28.9MiB/s (30.3MB/s)(58.0MiB/2008msec) 00:25:41.314 slat (nsec): min=1504, max=428079, avg=2232.81, stdev=4529.11 00:25:41.314 clat (usec): min=4016, max=16303, avg=9190.38, stdev=918.02 00:25:41.314 lat (usec): min=4030, max=16305, avg=9192.62, stdev=917.81 00:25:41.314 clat percentiles (usec): 00:25:41.314 | 1.00th=[ 7242], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:25:41.314 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:25:41.314 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10552], 00:25:41.314 | 99.00th=[11731], 99.50th=[13173], 99.90th=[15270], 99.95th=[15533], 00:25:41.314 | 99.99th=[16188] 00:25:41.314 bw ( KiB/s): min=28704, max=30144, per=100.00%, avg=29575.50, stdev=614.06, samples=4 00:25:41.314 iops : min= 7176, max= 7536, avg=7393.50, stdev=153.39, samples=4 00:25:41.314 write: IOPS=7358, BW=28.7MiB/s (30.1MB/s)(57.7MiB/2008msec); 0 zone resets 00:25:41.314 slat (nsec): min=1559, max=307450, avg=2284.05, stdev=3068.06 00:25:41.314 clat (usec): min=3033, max=15357, avg=8089.62, stdev=792.57 00:25:41.314 lat (usec): min=3050, max=15359, avg=8091.91, stdev=792.44 00:25:41.314 clat percentiles (usec): 00:25:41.314 | 1.00th=[ 6259], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7504], 00:25:41.314 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8225], 00:25:41.314 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:25:41.314 | 99.00th=[ 9896], 99.50th=[11207], 99.90th=[13960], 99.95th=[14353], 00:25:41.314 | 99.99th=[15270] 00:25:41.314 bw ( KiB/s): min=29248, max=29643, per=100.00%, avg=29443.25, stdev=195.35, samples=4 00:25:41.314 iops : min= 7312, max= 7410, avg=7360.50, stdev=48.70, samples=4 00:25:41.314 lat (msec) : 4=0.03%, 10=91.86%, 20=8.10% 00:25:41.314 cpu : usr=75.93%, sys=18.78%, ctx=190, majf=0, minf=5 00:25:41.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:41.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:41.315 issued rwts: total=14838,14776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:41.315 00:25:41.315 Run status group 0 (all jobs): 00:25:41.315 READ: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=58.0MiB (60.8MB), run=2008-2008msec 00:25:41.315 WRITE: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=57.7MiB (60.5MB), run=2008-2008msec 00:25:41.315 11:53:14 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:41.315 11:53:14 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:25:41.572 11:53:14 -- host/fio.sh@64 -- # ls_nested_guid=04de90a1-5246-4339-857b-2276cde3fd49 00:25:41.572 11:53:14 -- host/fio.sh@65 -- # get_lvs_free_mb 04de90a1-5246-4339-857b-2276cde3fd49 00:25:41.572 11:53:14 -- common/autotest_common.sh@1353 -- # local lvs_uuid=04de90a1-5246-4339-857b-2276cde3fd49 00:25:41.572 11:53:14 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:41.572 11:53:14 -- common/autotest_common.sh@1355 -- # local fc 00:25:41.572 11:53:14 -- common/autotest_common.sh@1356 -- # local cs 00:25:41.572 11:53:14 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:41.829 11:53:14 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:41.829 { 00:25:41.829 "base_bdev": "Nvme0n1", 00:25:41.829 "block_size": 4096, 00:25:41.829 "cluster_size": 1073741824, 00:25:41.829 "free_clusters": 0, 00:25:41.829 "name": "lvs_0", 00:25:41.829 "total_data_clusters": 4, 00:25:41.829 "uuid": "45225280-28d1-4d10-88a9-0bcaeb9e940a" 00:25:41.829 }, 00:25:41.829 { 00:25:41.829 "base_bdev": "ad528794-6158-41a5-b914-f4afe1ab8589", 00:25:41.829 "block_size": 4096, 00:25:41.829 "cluster_size": 4194304, 00:25:41.829 "free_clusters": 1022, 00:25:41.829 "name": "lvs_n_0", 00:25:41.829 "total_data_clusters": 1022, 00:25:41.829 "uuid": "04de90a1-5246-4339-857b-2276cde3fd49" 00:25:41.829 } 00:25:41.829 ]' 00:25:41.829 11:53:14 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="04de90a1-5246-4339-857b-2276cde3fd49") .free_clusters' 00:25:41.829 11:53:14 -- common/autotest_common.sh@1358 -- # fc=1022 00:25:41.829 11:53:14 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="04de90a1-5246-4339-857b-2276cde3fd49") .cluster_size' 00:25:41.829 11:53:14 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:41.829 11:53:14 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:25:41.829 4088 00:25:41.829 11:53:14 -- common/autotest_common.sh@1363 -- # echo 4088 00:25:41.829 11:53:14 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:25:42.087 a3a42b67-2a80-4b58-9390-26e5638e9c65 00:25:42.087 11:53:14 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:25:42.345 11:53:15 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:25:42.345 11:53:15 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:42.604 11:53:15 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:42.604 11:53:15 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:42.604 11:53:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:42.604 11:53:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:42.604 
11:53:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:42.604 11:53:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:42.604 11:53:15 -- common/autotest_common.sh@1330 -- # shift 00:25:42.604 11:53:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:42.604 11:53:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.604 11:53:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:42.604 11:53:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:42.604 11:53:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:42.604 11:53:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:42.605 11:53:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:42.605 11:53:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.605 11:53:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:42.605 11:53:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:42.605 11:53:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:42.605 11:53:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:42.605 11:53:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:42.605 11:53:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:42.605 11:53:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:42.863 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:42.863 fio-3.35 00:25:42.863 Starting 1 thread 00:25:45.400 00:25:45.401 test: (groupid=0, jobs=1): err= 0: pid=84651: Wed Nov 20 11:53:18 2024 00:25:45.401 read: IOPS=6576, BW=25.7MiB/s (26.9MB/s)(51.6MiB/2009msec) 00:25:45.401 slat (nsec): min=1439, max=447426, avg=1746.65, stdev=5250.70 00:25:45.401 clat (usec): min=4587, max=18128, avg=10237.74, stdev=858.75 00:25:45.401 lat (usec): min=4601, max=18129, avg=10239.49, stdev=858.40 00:25:45.401 clat percentiles (usec): 00:25:45.401 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:25:45.401 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:25:45.401 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:25:45.401 | 99.00th=[12387], 99.50th=[12780], 99.90th=[14746], 99.95th=[16712], 00:25:45.401 | 99.99th=[18220] 00:25:45.401 bw ( KiB/s): min=25421, max=26960, per=99.89%, avg=26277.25, stdev=640.65, samples=4 00:25:45.401 iops : min= 6355, max= 6740, avg=6569.25, stdev=160.27, samples=4 00:25:45.401 write: IOPS=6584, BW=25.7MiB/s (27.0MB/s)(51.7MiB/2009msec); 0 zone resets 00:25:45.401 slat (nsec): min=1479, max=383270, avg=1824.59, stdev=3600.50 00:25:45.401 clat (usec): min=3448, max=16906, avg=9151.22, stdev=783.43 00:25:45.401 lat (usec): min=3465, max=16908, avg=9153.04, stdev=783.20 00:25:45.401 clat percentiles (usec): 00:25:45.401 | 1.00th=[ 7439], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8586], 00:25:45.401 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:25:45.401 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:25:45.401 | 99.00th=[10945], 99.50th=[11207], 99.90th=[14615], 99.95th=[15664], 00:25:45.401 | 99.99th=[16909] 
00:25:45.401 bw ( KiB/s): min=26048, max=26560, per=99.94%, avg=26324.50, stdev=251.58, samples=4 00:25:45.401 iops : min= 6512, max= 6640, avg=6581.00, stdev=62.77, samples=4 00:25:45.401 lat (msec) : 4=0.02%, 10=63.81%, 20=36.18% 00:25:45.401 cpu : usr=75.65%, sys=20.47%, ctx=4, majf=0, minf=5 00:25:45.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:45.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:45.401 issued rwts: total=13212,13229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:45.401 00:25:45.401 Run status group 0 (all jobs): 00:25:45.401 READ: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=51.6MiB (54.1MB), run=2009-2009msec 00:25:45.401 WRITE: bw=25.7MiB/s (27.0MB/s), 25.7MiB/s-25.7MiB/s (27.0MB/s-27.0MB/s), io=51.7MiB (54.2MB), run=2009-2009msec 00:25:45.401 11:53:18 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:45.401 11:53:18 -- host/fio.sh@74 -- # sync 00:25:45.401 11:53:18 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:25:45.660 11:53:18 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:45.920 11:53:18 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:25:45.920 11:53:18 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:46.180 11:53:19 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:25:48.100 11:53:21 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:48.100 11:53:21 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:48.100 11:53:21 -- host/fio.sh@86 -- # nvmftestfini 00:25:48.100 11:53:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:48.100 11:53:21 -- nvmf/common.sh@116 -- # sync 00:25:48.100 11:53:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:48.100 11:53:21 -- nvmf/common.sh@119 -- # set +e 00:25:48.100 11:53:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:48.100 11:53:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:48.100 rmmod nvme_tcp 00:25:48.100 rmmod nvme_fabrics 00:25:48.100 rmmod nvme_keyring 00:25:48.359 11:53:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:48.359 11:53:21 -- nvmf/common.sh@123 -- # set -e 00:25:48.359 11:53:21 -- nvmf/common.sh@124 -- # return 0 00:25:48.360 11:53:21 -- nvmf/common.sh@477 -- # '[' -n 84209 ']' 00:25:48.360 11:53:21 -- nvmf/common.sh@478 -- # killprocess 84209 00:25:48.360 11:53:21 -- common/autotest_common.sh@936 -- # '[' -z 84209 ']' 00:25:48.360 11:53:21 -- common/autotest_common.sh@940 -- # kill -0 84209 00:25:48.360 11:53:21 -- common/autotest_common.sh@941 -- # uname 00:25:48.360 11:53:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:48.360 11:53:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84209 00:25:48.360 killing process with pid 84209 00:25:48.360 11:53:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:48.360 11:53:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:48.360 11:53:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84209' 00:25:48.360 11:53:21 -- 
common/autotest_common.sh@955 -- # kill 84209 00:25:48.360 11:53:21 -- common/autotest_common.sh@960 -- # wait 84209 00:25:48.619 11:53:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:48.619 11:53:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:48.619 11:53:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:48.619 11:53:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.619 11:53:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:48.619 11:53:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.619 11:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.619 11:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.619 11:53:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:48.619 00:25:48.619 real 0m19.284s 00:25:48.619 user 1m23.623s 00:25:48.619 sys 0m4.206s 00:25:48.619 11:53:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:48.619 11:53:21 -- common/autotest_common.sh@10 -- # set +x 00:25:48.619 ************************************ 00:25:48.619 END TEST nvmf_fio_host 00:25:48.619 ************************************ 00:25:48.619 11:53:21 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:48.619 11:53:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:48.619 11:53:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:48.619 11:53:21 -- common/autotest_common.sh@10 -- # set +x 00:25:48.619 ************************************ 00:25:48.619 START TEST nvmf_failover 00:25:48.619 ************************************ 00:25:48.619 11:53:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:48.619 * Looking for test storage... 00:25:48.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:48.619 11:53:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:48.880 11:53:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:48.880 11:53:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:48.880 11:53:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:48.880 11:53:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:48.880 11:53:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:48.880 11:53:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:48.880 11:53:21 -- scripts/common.sh@335 -- # IFS=.-: 00:25:48.880 11:53:21 -- scripts/common.sh@335 -- # read -ra ver1 00:25:48.880 11:53:21 -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.880 11:53:21 -- scripts/common.sh@336 -- # read -ra ver2 00:25:48.880 11:53:21 -- scripts/common.sh@337 -- # local 'op=<' 00:25:48.880 11:53:21 -- scripts/common.sh@339 -- # ver1_l=2 00:25:48.880 11:53:21 -- scripts/common.sh@340 -- # ver2_l=1 00:25:48.880 11:53:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:48.880 11:53:21 -- scripts/common.sh@343 -- # case "$op" in 00:25:48.880 11:53:21 -- scripts/common.sh@344 -- # : 1 00:25:48.880 11:53:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:48.880 11:53:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.880 11:53:21 -- scripts/common.sh@364 -- # decimal 1 00:25:48.880 11:53:21 -- scripts/common.sh@352 -- # local d=1 00:25:48.880 11:53:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.880 11:53:21 -- scripts/common.sh@354 -- # echo 1 00:25:48.880 11:53:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:48.880 11:53:21 -- scripts/common.sh@365 -- # decimal 2 00:25:48.880 11:53:21 -- scripts/common.sh@352 -- # local d=2 00:25:48.880 11:53:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.880 11:53:21 -- scripts/common.sh@354 -- # echo 2 00:25:48.880 11:53:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:48.880 11:53:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:48.880 11:53:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:48.880 11:53:21 -- scripts/common.sh@367 -- # return 0 00:25:48.880 11:53:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.880 11:53:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.880 --rc genhtml_branch_coverage=1 00:25:48.880 --rc genhtml_function_coverage=1 00:25:48.880 --rc genhtml_legend=1 00:25:48.880 --rc geninfo_all_blocks=1 00:25:48.880 --rc geninfo_unexecuted_blocks=1 00:25:48.880 00:25:48.880 ' 00:25:48.880 11:53:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.880 --rc genhtml_branch_coverage=1 00:25:48.880 --rc genhtml_function_coverage=1 00:25:48.880 --rc genhtml_legend=1 00:25:48.880 --rc geninfo_all_blocks=1 00:25:48.880 --rc geninfo_unexecuted_blocks=1 00:25:48.880 00:25:48.880 ' 00:25:48.880 11:53:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.880 --rc genhtml_branch_coverage=1 00:25:48.880 --rc genhtml_function_coverage=1 00:25:48.880 --rc genhtml_legend=1 00:25:48.880 --rc geninfo_all_blocks=1 00:25:48.880 --rc geninfo_unexecuted_blocks=1 00:25:48.880 00:25:48.880 ' 00:25:48.880 11:53:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.880 --rc genhtml_branch_coverage=1 00:25:48.880 --rc genhtml_function_coverage=1 00:25:48.880 --rc genhtml_legend=1 00:25:48.880 --rc geninfo_all_blocks=1 00:25:48.880 --rc geninfo_unexecuted_blocks=1 00:25:48.880 00:25:48.880 ' 00:25:48.880 11:53:21 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:48.880 11:53:21 -- nvmf/common.sh@7 -- # uname -s 00:25:48.880 11:53:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.881 11:53:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.881 11:53:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.881 11:53:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.881 11:53:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.881 11:53:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.881 11:53:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.881 11:53:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.881 11:53:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.881 11:53:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.881 11:53:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:25:48.881 
11:53:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:25:48.881 11:53:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.881 11:53:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.881 11:53:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:48.881 11:53:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.881 11:53:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.881 11:53:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.881 11:53:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.881 11:53:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.881 11:53:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.881 11:53:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.881 11:53:21 -- paths/export.sh@5 -- # export PATH 00:25:48.881 11:53:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.881 11:53:21 -- nvmf/common.sh@46 -- # : 0 00:25:48.881 11:53:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:48.881 11:53:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:48.881 11:53:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:48.881 11:53:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.881 11:53:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.881 11:53:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
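For reference, the nvmf_fio_host run above ends with nvmftestfini, and failover.sh (like fio.sh before it) starts by sourcing nvmf/common.sh and calling nvmftestinit, so the veth topology sketched earlier is torn down and rebuilt between tests; that is why the "Cannot find device" messages repeat here as stale interfaces are probed. A rough sketch of the teardown traced at the end of the previous test (pid and interface names as logged; _remove_spdk_ns is a helper whose body is not shown in the trace):

    sync
    modprobe -v -r nvme-tcp            # unloads nvme_tcp / nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill 84209 && wait 84209           # killprocess: stop the nvmf_tgt started for nvmf_fio_host
    _remove_spdk_ns                    # assumed to delete nvmf_tgt_ns_spdk and related state
    ip -4 addr flush nvmf_init_if      # clear the initiator address before the next test re-inits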
00:25:48.881 11:53:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:48.881 11:53:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:48.881 11:53:21 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:48.881 11:53:21 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.881 11:53:21 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:48.881 11:53:21 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:48.881 11:53:21 -- host/failover.sh@18 -- # nvmftestinit 00:25:48.881 11:53:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.881 11:53:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.881 11:53:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.881 11:53:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.881 11:53:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.881 11:53:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.881 11:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.881 11:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.881 11:53:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:48.881 11:53:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:48.881 11:53:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:48.881 11:53:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:48.881 11:53:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:48.881 11:53:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:48.881 11:53:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.881 11:53:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.881 11:53:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:48.881 11:53:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:48.881 11:53:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:48.881 11:53:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:48.881 11:53:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:48.881 11:53:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.881 11:53:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:48.881 11:53:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:48.881 11:53:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:48.881 11:53:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:48.881 11:53:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:48.881 11:53:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:48.881 Cannot find device "nvmf_tgt_br" 00:25:48.881 11:53:21 -- nvmf/common.sh@154 -- # true 00:25:48.881 11:53:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.881 Cannot find device "nvmf_tgt_br2" 00:25:48.881 11:53:21 -- nvmf/common.sh@155 -- # true 00:25:48.881 11:53:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:48.881 11:53:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:48.881 Cannot find device "nvmf_tgt_br" 00:25:48.881 11:53:21 -- nvmf/common.sh@157 -- # true 00:25:48.881 11:53:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:48.881 Cannot find device "nvmf_tgt_br2" 00:25:48.881 11:53:21 -- nvmf/common.sh@158 -- # true 00:25:48.881 11:53:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:49.141 11:53:21 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:25:49.141 11:53:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:49.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.141 11:53:21 -- nvmf/common.sh@161 -- # true 00:25:49.141 11:53:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:49.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.142 11:53:21 -- nvmf/common.sh@162 -- # true 00:25:49.142 11:53:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:49.142 11:53:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:49.142 11:53:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:49.142 11:53:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:49.142 11:53:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:49.142 11:53:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:49.142 11:53:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:49.142 11:53:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:49.142 11:53:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:49.142 11:53:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:49.142 11:53:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:49.142 11:53:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:49.142 11:53:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:49.142 11:53:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:49.142 11:53:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:49.142 11:53:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:49.142 11:53:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:49.142 11:53:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:49.142 11:53:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:49.142 11:53:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:49.142 11:53:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:49.142 11:53:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:49.142 11:53:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.142 11:53:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:49.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:25:49.142 00:25:49.142 --- 10.0.0.2 ping statistics --- 00:25:49.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.142 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:49.142 11:53:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:49.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:49.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:25:49.142 00:25:49.142 --- 10.0.0.3 ping statistics --- 00:25:49.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.142 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:49.142 11:53:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:49.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:25:49.142 00:25:49.142 --- 10.0.0.1 ping statistics --- 00:25:49.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.142 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:49.142 11:53:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.142 11:53:22 -- nvmf/common.sh@421 -- # return 0 00:25:49.142 11:53:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:49.142 11:53:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.142 11:53:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:49.142 11:53:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:49.142 11:53:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.142 11:53:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:49.142 11:53:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:49.142 11:53:22 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:49.142 11:53:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:49.142 11:53:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.142 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:25:49.142 11:53:22 -- nvmf/common.sh@469 -- # nvmfpid=84948 00:25:49.142 11:53:22 -- nvmf/common.sh@470 -- # waitforlisten 84948 00:25:49.142 11:53:22 -- common/autotest_common.sh@829 -- # '[' -z 84948 ']' 00:25:49.142 11:53:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.142 11:53:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:49.142 11:53:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.142 11:53:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.142 11:53:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.142 11:53:22 -- common/autotest_common.sh@10 -- # set +x 00:25:49.142 [2024-11-20 11:53:22.179084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:49.142 [2024-11-20 11:53:22.179144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.402 [2024-11-20 11:53:22.319447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.402 [2024-11-20 11:53:22.397238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:49.402 [2024-11-20 11:53:22.397373] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.402 [2024-11-20 11:53:22.397380] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:49.402 [2024-11-20 11:53:22.397385] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.402 [2024-11-20 11:53:22.398317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.402 [2024-11-20 11:53:22.398426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.402 [2024-11-20 11:53:22.398431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.343 11:53:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.343 11:53:23 -- common/autotest_common.sh@862 -- # return 0 00:25:50.343 11:53:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:50.343 11:53:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.343 11:53:23 -- common/autotest_common.sh@10 -- # set +x 00:25:50.343 11:53:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.343 11:53:23 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.343 [2024-11-20 11:53:23.227461] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.343 11:53:23 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:50.603 Malloc0 00:25:50.603 11:53:23 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.863 11:53:23 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.863 11:53:23 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.122 [2024-11-20 11:53:24.047144] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.122 11:53:24 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:51.382 [2024-11-20 11:53:24.238932] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.382 11:53:24 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:51.641 [2024-11-20 11:53:24.442739] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:51.641 11:53:24 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:51.641 11:53:24 -- host/failover.sh@31 -- # bdevperf_pid=85055 00:25:51.641 11:53:24 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:51.641 11:53:24 -- host/failover.sh@34 -- # waitforlisten 85055 /var/tmp/bdevperf.sock 00:25:51.641 11:53:24 -- common/autotest_common.sh@829 -- # '[' -z 85055 ']' 00:25:51.641 11:53:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.641 11:53:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:51.641 11:53:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.641 11:53:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.641 11:53:24 -- common/autotest_common.sh@10 -- # set +x 00:25:52.581 11:53:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.581 11:53:25 -- common/autotest_common.sh@862 -- # return 0 00:25:52.581 11:53:25 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:52.581 NVMe0n1 00:25:52.841 11:53:25 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:53.100 00:25:53.100 11:53:25 -- host/failover.sh@39 -- # run_test_pid=85105 00:25:53.101 11:53:25 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:53.101 11:53:25 -- host/failover.sh@41 -- # sleep 1 00:25:54.041 11:53:26 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.301 [2024-11-20 11:53:27.124693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125701] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.125978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126323] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the 
state(5) to be set 00:25:54.301 [2024-11-20 11:53:27.126603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.302 [2024-11-20 11:53:27.126637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.302 [2024-11-20 11:53:27.126682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f495b0 is same with the state(5) to be set 00:25:54.302 11:53:27 -- host/failover.sh@45 -- # sleep 3 00:25:57.597 11:53:30 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:57.597 00:25:57.597 11:53:30 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:57.597 [2024-11-20 11:53:30.593789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593839] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593895] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.593995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.594000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.594004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.594009] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.594013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.597 [2024-11-20 11:53:30.594017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594090] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 [2024-11-20 11:53:30.594101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a420 is same with the state(5) to be set 00:25:57.598 11:53:30 -- host/failover.sh@50 -- # sleep 3 00:26:00.892 11:53:33 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.892 [2024-11-20 11:53:33.806458] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.892 11:53:33 -- host/failover.sh@55 -- # sleep 1 00:26:01.829 11:53:34 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:02.089 [2024-11-20 11:53:35.007306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 
11:53:35.007341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007363] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.089 [2024-11-20 11:53:35.007432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same 
with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007452] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007550] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 [2024-11-20 11:53:35.007554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4afb0 is same with the state(5) to be set 00:26:02.090 11:53:35 -- host/failover.sh@59 -- # wait 85105 00:26:08.670 0 00:26:08.670 11:53:41 -- host/failover.sh@61 -- # killprocess 85055 00:26:08.670 11:53:41 -- common/autotest_common.sh@936 -- # '[' -z 85055 ']' 00:26:08.670 11:53:41 -- common/autotest_common.sh@940 -- # kill -0 85055 00:26:08.670 11:53:41 -- common/autotest_common.sh@941 -- # uname 00:26:08.670 11:53:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.670 11:53:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85055 00:26:08.670 11:53:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:08.670 11:53:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:08.670 11:53:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85055' 00:26:08.670 killing process with pid 85055 00:26:08.670 11:53:41 -- common/autotest_common.sh@955 -- # kill 85055 00:26:08.670 11:53:41 -- common/autotest_common.sh@960 -- # wait 85055 00:26:08.670 11:53:41 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:08.670 [2024-11-20 11:53:24.499602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:08.670 [2024-11-20 11:53:24.499695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85055 ] 00:26:08.670 [2024-11-20 11:53:24.638511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.670 [2024-11-20 11:53:24.719433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.670 Running I/O for 15 seconds... 
00:26:08.670 [2024-11-20 11:53:27.126936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.126985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.670 [2024-11-20 11:53:27.127150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.670 [2024-11-20 11:53:27.127160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:28880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.671 [2024-11-20 11:53:27.127810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.671 [2024-11-20 11:53:27.127840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.671 [2024-11-20 11:53:27.127848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.127868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.127887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.127906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.127925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.127944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.127963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.127982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.127998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:08.672 [2024-11-20 11:53:27.128026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.672 [2024-11-20 11:53:27.128218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.672 [2024-11-20 11:53:27.128280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.672 [2024-11-20 11:53:27.128319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.672 [2024-11-20 11:53:27.128357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.672 [2024-11-20 11:53:27.128394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128413] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.672 [2024-11-20 11:53:27.128434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.672 [2024-11-20 11:53:27.128452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.672 [2024-11-20 11:53:27.128463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.672 [2024-11-20 11:53:27.128471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 
[2024-11-20 11:53:27.128828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.673 [2024-11-20 11:53:27.128894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.128980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.128993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.129003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.129012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.129022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.129031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.129041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.129050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.129062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.129071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.129081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.129090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.673 [2024-11-20 11:53:27.129100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.673 [2024-11-20 11:53:27.129109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.674 [2024-11-20 11:53:27.129265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.674 [2024-11-20 11:53:27.129360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.674 [2024-11-20 11:53:27.129381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28752 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:27.129517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fe9a0 is same with the state(5) to be set 00:26:08.674 [2024-11-20 11:53:27.129538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.674 [2024-11-20 11:53:27.129545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.674 [2024-11-20 11:53:27.129552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28840 len:8 PRP1 0x0 PRP2 0x0 00:26:08.674 [2024-11-20 11:53:27.129560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:27.129605] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8fe9a0 was disconnected and freed. reset controller. 
00:26:08.674 [2024-11-20 11:53:27.129617] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:08.674 [2024-11-20 11:53:27.129667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.674 [2024-11-20 11:53:27.129678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.674 [2024-11-20 11:53:27.129688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.674 [2024-11-20 11:53:27.129697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.674 [2024-11-20 11:53:27.129706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.674 [2024-11-20 11:53:27.129715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.674 [2024-11-20 11:53:27.129725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.674 [2024-11-20 11:53:27.129735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.674 [2024-11-20 11:53:27.129744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.674 [2024-11-20 11:53:27.131538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.674 [2024-11-20 11:53:27.131566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889440 (9): Bad file descriptor
00:26:08.674 [2024-11-20 11:53:27.150014] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
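The sequence above is the initiator-side failover path in SPDK's bdev_nvme module: outstanding I/O on the TCP qpair is completed with ABORTED - SQ DELETION status, the qpair is disconnected and freed, the controller is failed over from 10.0.0.2:4420 to 10.0.0.2:4421, and the reset/reconnect completes before I/O resumes. As a minimal sketch of how a two-path setup like this can be driven with SPDK's rpc.py (the subsystem NQN and addresses are taken from the log; the bdev name "Nvme0" and the exact command sequence are assumptions and may differ from what the failover test script actually runs):

  # Target side: expose the subsystem on both ports so the host has an alternate path (hypothetical setup, not copied from this run)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side: attach the same controller name to both trids so bdev_nvme has a failover trid to switch to
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Removing the first listener then forces a failover like the one logged above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420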
00:26:08.674 [2024-11-20 11:53:30.594174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:30.594212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:30.594232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:30.594258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:30.594269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:30.594278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:30.594289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:30.594298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.674 [2024-11-20 11:53:30.594308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.674 [2024-11-20 11:53:30.594317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594423] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.675 [2024-11-20 11:53:30.594754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.675 [2024-11-20 11:53:30.594798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.675 [2024-11-20 11:53:30.594817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.675 [2024-11-20 11:53:30.594855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.675 [2024-11-20 11:53:30.594865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.676 [2024-11-20 11:53:30.594874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.594885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.594893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.594904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.594912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.594923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.594932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.594942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.594951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.594961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.594970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.594980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.594988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.594999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.676 [2024-11-20 11:53:30.595051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.676 [2024-11-20 11:53:30.595107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 
[2024-11-20 11:53:30.595222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.676 [2024-11-20 11:53:30.595417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.676 [2024-11-20 11:53:30.595456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.676 [2024-11-20 11:53:30.595466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595618] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.677 [2024-11-20 11:53:30.595965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.595983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.595997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.596005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:08.677 [2024-11-20 11:53:30.596015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.596023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.596032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.596041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.596050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.596058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.596068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.596076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.596086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.677 [2024-11-20 11:53:30.596095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.677 [2024-11-20 11:53:30.596105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596197] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.678 [2024-11-20 11:53:30.596542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.678 [2024-11-20 11:53:30.596561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.678 [2024-11-20 11:53:30.596570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.678 [2024-11-20 11:53:30.596579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.678 [2024-11-20 11:53:30.596588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.678 [2024-11-20 11:53:30.596597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.678 [2024-11-20 11:53:30.596606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.678 [2024-11-20 11:53:30.596616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.678 [2024-11-20 11:53:30.596625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.678 [2024-11-20 11:53:30.596634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.678 [2024-11-20 11:53:30.596643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.678 [2024-11-20 11:53:30.596661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.678 [2024-11-20 11:53:30.596672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.678 [2024-11-20 11:53:30.596681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.678 [2024-11-20 11:53:30.596690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881ae0 is same with the state(5) to be set
00:26:08.678 [2024-11-20 11:53:30.596720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:08.678 [2024-11-20 11:53:30.596727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:08.678 [2024-11-20 11:53:30.596734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13880 len:8 PRP1 0x0 PRP2 0x0
00:26:08.678 [2024-11-20 11:53:30.596743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.678 [2024-11-20 11:53:30.596786] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x881ae0 was disconnected and freed. reset controller. 
00:26:08.678 [2024-11-20 11:53:30.596797] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:08.678 [2024-11-20 11:53:30.596836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.678 [2024-11-20 11:53:30.596847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.679 [2024-11-20 11:53:30.596856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.679 [2024-11-20 11:53:30.596864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.679 [2024-11-20 11:53:30.596873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.679 [2024-11-20 11:53:30.596885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.679 [2024-11-20 11:53:30.596894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:08.679 [2024-11-20 11:53:30.596902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.679 [2024-11-20 11:53:30.596911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.679 [2024-11-20 11:53:30.598515] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.679 [2024-11-20 11:53:30.598542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889440 (9): Bad file descriptor
00:26:08.679 [2024-11-20 11:53:30.617266] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:08.679 [2024-11-20 11:53:35.006699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.679 [2024-11-20 11:53:35.006753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.006765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.679 [2024-11-20 11:53:35.006773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.006783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.679 [2024-11-20 11:53:35.006791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.006800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.679 [2024-11-20 11:53:35.006808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.006816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889440 is same with the state(5) to be set 00:26:08.679 [2024-11-20 11:53:35.007613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.007986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.007996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.008005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.008016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.679 [2024-11-20 11:53:35.008026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.008035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.008045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.008055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.008064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.008074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.008083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.008093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.008102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.008112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.679 [2024-11-20 11:53:35.008121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.679 [2024-11-20 11:53:35.008132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.680 [2024-11-20 11:53:35.008225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.680 [2024-11-20 11:53:35.008286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.680 [2024-11-20 11:53:35.008364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.680 [2024-11-20 11:53:35.008384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008587] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.680 [2024-11-20 11:53:35.008726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.680 [2024-11-20 11:53:35.008774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.680 [2024-11-20 11:53:35.008782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8440 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.008801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.008944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.008982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.008993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 
11:53:35.009002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.009221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.009240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.009258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.681 [2024-11-20 11:53:35.009294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.681 [2024-11-20 11:53:35.009312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.681 [2024-11-20 11:53:35.009322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:08.682 [2024-11-20 11:53:35.009582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009776] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.682 [2024-11-20 11:53:35.009893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.682 [2024-11-20 11:53:35.009926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.682 [2024-11-20 11:53:35.009934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.009944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.009952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.009962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:85 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.009970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.009979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.683 [2024-11-20 11:53:35.009988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.009997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.683 [2024-11-20 11:53:35.010139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fc1f0 is same with the 
state(5) to be set 00:26:08.683 [2024-11-20 11:53:35.010159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.683 [2024-11-20 11:53:35.010165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.683 [2024-11-20 11:53:35.010173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8312 len:8 PRP1 0x0 PRP2 0x0 00:26:08.683 [2024-11-20 11:53:35.010181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.683 [2024-11-20 11:53:35.010223] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8fc1f0 was disconnected and freed. reset controller. 00:26:08.683 [2024-11-20 11:53:35.010234] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:08.683 [2024-11-20 11:53:35.010243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.683 [2024-11-20 11:53:35.012075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.683 [2024-11-20 11:53:35.012102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889440 (9): Bad file descriptor 00:26:08.683 [2024-11-20 11:53:35.033914] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:08.683 00:26:08.683 Latency(us) 00:26:08.683 [2024-11-20T11:53:41.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.683 [2024-11-20T11:53:41.726Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:08.683 Verification LBA range: start 0x0 length 0x4000 00:26:08.683 NVMe0n1 : 15.01 17937.35 70.07 283.96 0.00 7012.36 414.97 13507.86 00:26:08.683 [2024-11-20T11:53:41.726Z] =================================================================================================================== 00:26:08.683 [2024-11-20T11:53:41.726Z] Total : 17937.35 70.07 283.96 0.00 7012.36 414.97 13507.86 00:26:08.683 Received shutdown signal, test time was about 15.000000 seconds 00:26:08.683 00:26:08.683 Latency(us) 00:26:08.683 [2024-11-20T11:53:41.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.683 [2024-11-20T11:53:41.726Z] =================================================================================================================== 00:26:08.683 [2024-11-20T11:53:41.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.683 11:53:41 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:08.683 11:53:41 -- host/failover.sh@65 -- # count=3 00:26:08.683 11:53:41 -- host/failover.sh@67 -- # (( count != 3 )) 00:26:08.683 11:53:41 -- host/failover.sh@73 -- # bdevperf_pid=85308 00:26:08.683 11:53:41 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:08.683 11:53:41 -- host/failover.sh@75 -- # waitforlisten 85308 /var/tmp/bdevperf.sock 00:26:08.683 11:53:41 -- common/autotest_common.sh@829 -- # '[' -z 85308 ']' 00:26:08.683 11:53:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.683 11:53:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
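The failover run above follows a simple pattern: bdevperf is started in RPC-driven mode (-z) against the target, the test moves listeners between ports while I/O is running, and the pass/fail check is just how many times the host driver logged "Resetting controller successful". A minimal sketch of that pattern, using the binaries, flags, addresses and NQN recorded in this log (capturing the bdevperf output into try.txt is an assumption based on the cat/rm commands that appear later; pid/waitforlisten handling and the listener moves themselves are omitted):

#!/usr/bin/env bash
# Sketch only: paths, flags and the expected reset count are copied from the log above.
SPDK=/home/vagrant/spdk_repo/spdk
LOGFILE=$SPDK/test/nvmf/host/try.txt          # assumed capture file for bdevperf output

# Start bdevperf in RPC mode and capture its output.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 1 -f > "$LOGFILE" 2>&1 &

# Attach the controller through bdevperf's RPC socket and trigger the workload.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

# ... listeners are added on 4421/4422 and the 4420 path detached to force failover ...

# Pass criterion used above: the controller must have been reset exactly three times.
count=$(grep -c 'Resetting controller successful' "$LOGFILE")
(( count == 3 )) || exit 1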
00:26:08.683 11:53:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.683 11:53:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.683 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:26:09.254 11:53:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.254 11:53:42 -- common/autotest_common.sh@862 -- # return 0 00:26:09.254 11:53:42 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:09.513 [2024-11-20 11:53:42.376800] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:09.513 11:53:42 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:09.773 [2024-11-20 11:53:42.572572] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:09.773 11:53:42 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.032 NVMe0n1 00:26:10.032 11:53:42 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.290 00:26:10.290 11:53:43 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.549 00:26:10.549 11:53:43 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:10.549 11:53:43 -- host/failover.sh@82 -- # grep -q NVMe0 00:26:10.549 11:53:43 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.808 11:53:43 -- host/failover.sh@87 -- # sleep 3 00:26:14.103 11:53:46 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:14.103 11:53:46 -- host/failover.sh@88 -- # grep -q NVMe0 00:26:14.103 11:53:46 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:14.103 11:53:46 -- host/failover.sh@90 -- # run_test_pid=85439 00:26:14.103 11:53:46 -- host/failover.sh@92 -- # wait 85439 00:26:15.043 0 00:26:15.043 11:53:48 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:15.043 [2024-11-20 11:53:41.334661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:15.043 [2024-11-20 11:53:41.334727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85308 ] 00:26:15.043 [2024-11-20 11:53:41.456229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.043 [2024-11-20 11:53:41.537098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.043 [2024-11-20 11:53:43.718101] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:15.043 [2024-11-20 11:53:43.718202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.043 [2024-11-20 11:53:43.718217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.043 [2024-11-20 11:53:43.718228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.043 [2024-11-20 11:53:43.718236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.043 [2024-11-20 11:53:43.718245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.043 [2024-11-20 11:53:43.718254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.043 [2024-11-20 11:53:43.718262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.043 [2024-11-20 11:53:43.718270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.043 [2024-11-20 11:53:43.718278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:15.043 [2024-11-20 11:53:43.718316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:15.043 [2024-11-20 11:53:43.718334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171b440 (9): Bad file descriptor 00:26:15.043 [2024-11-20 11:53:43.726734] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:15.043 Running I/O for 1 seconds... 
00:26:15.043 00:26:15.043 Latency(us) 00:26:15.043 [2024-11-20T11:53:48.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.043 [2024-11-20T11:53:48.086Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:15.043 Verification LBA range: start 0x0 length 0x4000 00:26:15.043 NVMe0n1 : 1.01 17882.13 69.85 0.00 0.00 7129.11 1058.88 12878.25 00:26:15.043 [2024-11-20T11:53:48.086Z] =================================================================================================================== 00:26:15.043 [2024-11-20T11:53:48.086Z] Total : 17882.13 69.85 0.00 0.00 7129.11 1058.88 12878.25 00:26:15.043 11:53:48 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.043 11:53:48 -- host/failover.sh@95 -- # grep -q NVMe0 00:26:15.303 11:53:48 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.563 11:53:48 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.563 11:53:48 -- host/failover.sh@99 -- # grep -q NVMe0 00:26:15.855 11:53:48 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.855 11:53:48 -- host/failover.sh@101 -- # sleep 3 00:26:19.172 11:53:51 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:19.172 11:53:51 -- host/failover.sh@103 -- # grep -q NVMe0 00:26:19.172 11:53:52 -- host/failover.sh@108 -- # killprocess 85308 00:26:19.172 11:53:52 -- common/autotest_common.sh@936 -- # '[' -z 85308 ']' 00:26:19.172 11:53:52 -- common/autotest_common.sh@940 -- # kill -0 85308 00:26:19.172 11:53:52 -- common/autotest_common.sh@941 -- # uname 00:26:19.172 11:53:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.172 11:53:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85308 00:26:19.172 killing process with pid 85308 00:26:19.172 11:53:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:19.172 11:53:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:19.172 11:53:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85308' 00:26:19.172 11:53:52 -- common/autotest_common.sh@955 -- # kill 85308 00:26:19.172 11:53:52 -- common/autotest_common.sh@960 -- # wait 85308 00:26:19.432 11:53:52 -- host/failover.sh@110 -- # sync 00:26:19.432 11:53:52 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.692 11:53:52 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:19.692 11:53:52 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:19.692 11:53:52 -- host/failover.sh@116 -- # nvmftestfini 00:26:19.692 11:53:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:19.692 11:53:52 -- nvmf/common.sh@116 -- # sync 00:26:19.692 11:53:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:19.692 11:53:52 -- nvmf/common.sh@119 -- # set +e 00:26:19.692 11:53:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:19.692 11:53:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:19.692 rmmod nvme_tcp 
00:26:19.692 rmmod nvme_fabrics 00:26:19.692 rmmod nvme_keyring 00:26:19.692 11:53:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:19.692 11:53:52 -- nvmf/common.sh@123 -- # set -e 00:26:19.692 11:53:52 -- nvmf/common.sh@124 -- # return 0 00:26:19.692 11:53:52 -- nvmf/common.sh@477 -- # '[' -n 84948 ']' 00:26:19.692 11:53:52 -- nvmf/common.sh@478 -- # killprocess 84948 00:26:19.692 11:53:52 -- common/autotest_common.sh@936 -- # '[' -z 84948 ']' 00:26:19.692 11:53:52 -- common/autotest_common.sh@940 -- # kill -0 84948 00:26:19.692 11:53:52 -- common/autotest_common.sh@941 -- # uname 00:26:19.692 11:53:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.692 11:53:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84948 00:26:19.692 killing process with pid 84948 00:26:19.692 11:53:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:19.692 11:53:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:19.692 11:53:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84948' 00:26:19.692 11:53:52 -- common/autotest_common.sh@955 -- # kill 84948 00:26:19.692 11:53:52 -- common/autotest_common.sh@960 -- # wait 84948 00:26:19.953 11:53:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:19.953 11:53:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:19.953 11:53:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:19.953 11:53:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.953 11:53:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:19.953 11:53:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.953 11:53:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.953 11:53:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.953 11:53:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:19.953 00:26:19.953 real 0m31.428s 00:26:19.953 user 2m1.202s 00:26:19.953 sys 0m4.275s 00:26:19.953 11:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:19.953 11:53:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.953 ************************************ 00:26:19.953 END TEST nvmf_failover 00:26:19.953 ************************************ 00:26:20.213 11:53:53 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:20.213 11:53:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:20.213 11:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:20.213 11:53:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.213 ************************************ 00:26:20.213 START TEST nvmf_discovery 00:26:20.213 ************************************ 00:26:20.213 11:53:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:20.213 * Looking for test storage... 
00:26:20.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:20.213 11:53:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:20.213 11:53:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:20.213 11:53:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:20.213 11:53:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:20.213 11:53:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:20.213 11:53:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:20.213 11:53:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:20.213 11:53:53 -- scripts/common.sh@335 -- # IFS=.-: 00:26:20.213 11:53:53 -- scripts/common.sh@335 -- # read -ra ver1 00:26:20.213 11:53:53 -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.213 11:53:53 -- scripts/common.sh@336 -- # read -ra ver2 00:26:20.213 11:53:53 -- scripts/common.sh@337 -- # local 'op=<' 00:26:20.213 11:53:53 -- scripts/common.sh@339 -- # ver1_l=2 00:26:20.213 11:53:53 -- scripts/common.sh@340 -- # ver2_l=1 00:26:20.213 11:53:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:20.213 11:53:53 -- scripts/common.sh@343 -- # case "$op" in 00:26:20.213 11:53:53 -- scripts/common.sh@344 -- # : 1 00:26:20.213 11:53:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:20.213 11:53:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:20.213 11:53:53 -- scripts/common.sh@364 -- # decimal 1 00:26:20.213 11:53:53 -- scripts/common.sh@352 -- # local d=1 00:26:20.213 11:53:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.213 11:53:53 -- scripts/common.sh@354 -- # echo 1 00:26:20.213 11:53:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:20.213 11:53:53 -- scripts/common.sh@365 -- # decimal 2 00:26:20.213 11:53:53 -- scripts/common.sh@352 -- # local d=2 00:26:20.213 11:53:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.213 11:53:53 -- scripts/common.sh@354 -- # echo 2 00:26:20.213 11:53:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:20.213 11:53:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:20.213 11:53:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:20.213 11:53:53 -- scripts/common.sh@367 -- # return 0 00:26:20.213 11:53:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.213 11:53:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.213 --rc genhtml_branch_coverage=1 00:26:20.213 --rc genhtml_function_coverage=1 00:26:20.213 --rc genhtml_legend=1 00:26:20.213 --rc geninfo_all_blocks=1 00:26:20.213 --rc geninfo_unexecuted_blocks=1 00:26:20.213 00:26:20.213 ' 00:26:20.213 11:53:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.213 --rc genhtml_branch_coverage=1 00:26:20.213 --rc genhtml_function_coverage=1 00:26:20.213 --rc genhtml_legend=1 00:26:20.213 --rc geninfo_all_blocks=1 00:26:20.213 --rc geninfo_unexecuted_blocks=1 00:26:20.213 00:26:20.213 ' 00:26:20.213 11:53:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.213 --rc genhtml_branch_coverage=1 00:26:20.213 --rc genhtml_function_coverage=1 00:26:20.213 --rc genhtml_legend=1 00:26:20.213 --rc geninfo_all_blocks=1 00:26:20.213 --rc geninfo_unexecuted_blocks=1 00:26:20.213 00:26:20.213 ' 00:26:20.213 
11:53:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.213 --rc genhtml_branch_coverage=1 00:26:20.213 --rc genhtml_function_coverage=1 00:26:20.213 --rc genhtml_legend=1 00:26:20.213 --rc geninfo_all_blocks=1 00:26:20.213 --rc geninfo_unexecuted_blocks=1 00:26:20.213 00:26:20.213 ' 00:26:20.213 11:53:53 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:20.213 11:53:53 -- nvmf/common.sh@7 -- # uname -s 00:26:20.474 11:53:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.474 11:53:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.474 11:53:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.474 11:53:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.474 11:53:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.474 11:53:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.474 11:53:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.474 11:53:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.474 11:53:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.474 11:53:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.474 11:53:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:26:20.474 11:53:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:26:20.474 11:53:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.474 11:53:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.474 11:53:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:20.474 11:53:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:20.474 11:53:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.474 11:53:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.474 11:53:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.474 11:53:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.474 11:53:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.474 11:53:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.475 11:53:53 -- paths/export.sh@5 -- # export PATH 00:26:20.475 11:53:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.475 11:53:53 -- nvmf/common.sh@46 -- # : 0 00:26:20.475 11:53:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:20.475 11:53:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:20.475 11:53:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:20.475 11:53:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.475 11:53:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.475 11:53:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:20.475 11:53:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:20.475 11:53:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:20.475 11:53:53 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:20.475 11:53:53 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:20.475 11:53:53 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:20.475 11:53:53 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:20.475 11:53:53 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:20.475 11:53:53 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:20.475 11:53:53 -- host/discovery.sh@25 -- # nvmftestinit 00:26:20.475 11:53:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:20.475 11:53:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.475 11:53:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:20.475 11:53:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:20.475 11:53:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:20.475 11:53:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.475 11:53:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.475 11:53:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.475 11:53:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:20.475 11:53:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:20.475 11:53:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:20.475 11:53:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:20.475 11:53:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:20.475 11:53:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:20.475 11:53:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.475 11:53:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.475 11:53:53 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:20.475 11:53:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:20.475 11:53:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:20.475 11:53:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:20.475 11:53:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:20.475 11:53:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.475 11:53:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:20.475 11:53:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:20.475 11:53:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:20.475 11:53:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:20.475 11:53:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:20.475 11:53:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:20.475 Cannot find device "nvmf_tgt_br" 00:26:20.475 11:53:53 -- nvmf/common.sh@154 -- # true 00:26:20.475 11:53:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.475 Cannot find device "nvmf_tgt_br2" 00:26:20.475 11:53:53 -- nvmf/common.sh@155 -- # true 00:26:20.475 11:53:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:20.475 11:53:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:20.475 Cannot find device "nvmf_tgt_br" 00:26:20.475 11:53:53 -- nvmf/common.sh@157 -- # true 00:26:20.475 11:53:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:20.475 Cannot find device "nvmf_tgt_br2" 00:26:20.475 11:53:53 -- nvmf/common.sh@158 -- # true 00:26:20.475 11:53:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:20.475 11:53:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:20.475 11:53:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.475 11:53:53 -- nvmf/common.sh@161 -- # true 00:26:20.475 11:53:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.475 11:53:53 -- nvmf/common.sh@162 -- # true 00:26:20.475 11:53:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:20.475 11:53:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:20.475 11:53:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:20.475 11:53:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:20.475 11:53:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:20.735 11:53:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:20.735 11:53:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:20.735 11:53:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:20.735 11:53:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:20.735 11:53:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:20.735 11:53:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:20.735 11:53:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:20.735 11:53:53 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:20.735 11:53:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:20.735 11:53:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:20.735 11:53:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:20.735 11:53:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:20.735 11:53:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:20.735 11:53:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:20.735 11:53:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:20.735 11:53:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:20.735 11:53:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:20.735 11:53:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:20.735 11:53:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:20.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:26:20.735 00:26:20.735 --- 10.0.0.2 ping statistics --- 00:26:20.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.736 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:26:20.736 11:53:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:20.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:20.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:26:20.736 00:26:20.736 --- 10.0.0.3 ping statistics --- 00:26:20.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.736 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:20.736 11:53:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:20.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:26:20.736 00:26:20.736 --- 10.0.0.1 ping statistics --- 00:26:20.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.736 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:26:20.736 11:53:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.736 11:53:53 -- nvmf/common.sh@421 -- # return 0 00:26:20.736 11:53:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:20.736 11:53:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.736 11:53:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:20.736 11:53:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:20.736 11:53:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.736 11:53:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:20.736 11:53:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:20.736 11:53:53 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:20.736 11:53:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:20.736 11:53:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:20.736 11:53:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.736 11:53:53 -- nvmf/common.sh@469 -- # nvmfpid=85745 00:26:20.736 11:53:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:20.736 11:53:53 -- nvmf/common.sh@470 -- # waitforlisten 85745 00:26:20.736 11:53:53 -- common/autotest_common.sh@829 -- # '[' -z 85745 ']' 00:26:20.736 11:53:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.736 11:53:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.736 11:53:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.736 11:53:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.736 11:53:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.736 [2024-11-20 11:53:53.690529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:20.736 [2024-11-20 11:53:53.690594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.995 [2024-11-20 11:53:53.827614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.995 [2024-11-20 11:53:53.904563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:20.995 [2024-11-20 11:53:53.904706] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.995 [2024-11-20 11:53:53.904714] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.995 [2024-11-20 11:53:53.904719] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
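Everything the discovery test needs network-wise is built from the ip and iptables commands recorded above: a dedicated network namespace for the target, veth pairs for the initiator and two target interfaces, and a bridge joining the host-side peers. A condensed sketch of that topology (names and addresses copied from the log; run as root, teardown and error handling omitted):

# Condensed from the nvmf_veth_init steps above (sketch only).
NS=nvmf_tgt_ns_spdk
ip netns add $NS
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns $NS
ip link set nvmf_tgt_if2 netns $NS

# Initiator at 10.0.0.1; the target answers on 10.0.0.2 and 10.0.0.3 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec $NS ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec $NS ip link set nvmf_tgt_if up
ip netns exec $NS ip link set nvmf_tgt_if2 up
ip netns exec $NS ip link set lo up

# Bridge the host-side peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: both target addresses must be reachable from the initiator.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3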
00:26:20.995 [2024-11-20 11:53:53.904740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.565 11:53:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.565 11:53:54 -- common/autotest_common.sh@862 -- # return 0 00:26:21.565 11:53:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:21.565 11:53:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:21.565 11:53:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.565 11:53:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.565 11:53:54 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.565 11:53:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.565 11:53:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.565 [2024-11-20 11:53:54.559332] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.565 11:53:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.565 11:53:54 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:21.565 11:53:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.565 11:53:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.565 [2024-11-20 11:53:54.571419] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:21.565 11:53:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.565 11:53:54 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:21.565 11:53:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.565 11:53:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.565 null0 00:26:21.565 11:53:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.565 11:53:54 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:21.565 11:53:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.565 11:53:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.565 null1 00:26:21.565 11:53:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.565 11:53:54 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:21.565 11:53:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.565 11:53:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.824 11:53:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.824 11:53:54 -- host/discovery.sh@45 -- # hostpid=85801 00:26:21.824 11:53:54 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:21.824 11:53:54 -- host/discovery.sh@46 -- # waitforlisten 85801 /tmp/host.sock 00:26:21.824 11:53:54 -- common/autotest_common.sh@829 -- # '[' -z 85801 ']' 00:26:21.824 11:53:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:21.824 11:53:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.824 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:21.824 11:53:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:21.824 11:53:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.825 11:53:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.825 [2024-11-20 11:53:54.666907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:21.825 [2024-11-20 11:53:54.666975] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85801 ] 00:26:21.825 [2024-11-20 11:53:54.802520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.084 [2024-11-20 11:53:54.880732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:22.084 [2024-11-20 11:53:54.880857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.653 11:53:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.653 11:53:55 -- common/autotest_common.sh@862 -- # return 0 00:26:22.653 11:53:55 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.653 11:53:55 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:22.653 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.653 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.653 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.653 11:53:55 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:22.653 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.653 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.653 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.653 11:53:55 -- host/discovery.sh@72 -- # notify_id=0 00:26:22.653 11:53:55 -- host/discovery.sh@78 -- # get_subsystem_names 00:26:22.653 11:53:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.653 11:53:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.653 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.653 11:53:55 -- host/discovery.sh@59 -- # sort 00:26:22.653 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.653 11:53:55 -- host/discovery.sh@59 -- # xargs 00:26:22.653 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.653 11:53:55 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:26:22.653 11:53:55 -- host/discovery.sh@79 -- # get_bdev_list 00:26:22.653 11:53:55 -- host/discovery.sh@55 -- # sort 00:26:22.653 11:53:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.653 11:53:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.653 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.653 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.653 11:53:55 -- host/discovery.sh@55 -- # xargs 00:26:22.653 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.653 11:53:55 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:26:22.653 11:53:55 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.653 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.653 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.653 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.653 11:53:55 -- host/discovery.sh@82 -- # get_subsystem_names 00:26:22.653 11:53:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.653 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.653 11:53:55 -- 
host/discovery.sh@59 -- # xargs 00:26:22.653 11:53:55 -- host/discovery.sh@59 -- # sort 00:26:22.653 11:53:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.653 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.653 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.913 11:53:55 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:26:22.913 11:53:55 -- host/discovery.sh@83 -- # get_bdev_list 00:26:22.913 11:53:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.913 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.913 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # sort 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # xargs 00:26:22.914 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.914 11:53:55 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:22.914 11:53:55 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:22.914 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.914 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.914 11:53:55 -- host/discovery.sh@86 -- # get_subsystem_names 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.914 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.914 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # sort 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # xargs 00:26:22.914 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.914 11:53:55 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:26:22.914 11:53:55 -- host/discovery.sh@87 -- # get_bdev_list 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.914 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # xargs 00:26:22.914 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # sort 00:26:22.914 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.914 11:53:55 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:22.914 11:53:55 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.914 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.914 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 [2024-11-20 11:53:55.865172] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.914 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.914 11:53:55 -- host/discovery.sh@92 -- # get_subsystem_names 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.914 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.914 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # sort 00:26:22.914 11:53:55 -- host/discovery.sh@59 -- # xargs 
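On the target side the discovery test drives everything through rpc_cmd: a TCP transport, a discovery listener on port 8009, two null bdevs, and a subsystem that is populated step by step. On the host side, a second nvmf_tgt (started with -r /tmp/host.sock) runs bdev_nvme_start_discovery, and the helpers above poll bdev_nvme_get_controllers and bdev_get_bdevs for the controllers and namespaces it picks up. A condensed sketch of the RPC sequence used in this run, expressed with rpc.py directly (socket paths, NQNs and sizes are taken from the surrounding log; the order is condensed, since in the run above discovery is started first and the add_host/null1 steps happen later so the discovery service picks them up dynamically):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target (default RPC socket): transport, discovery listener, backing bdevs, subsystem.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# Host (the nvmf_tgt listening on /tmp/host.sock): start discovery, then inspect what arrived.
$RPC -s /tmp/host.sock log_set_flag bdev_nvme
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test
$RPC -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect nvme0n1 (later nvme0n2)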
00:26:22.914 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.914 11:53:55 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:22.914 11:53:55 -- host/discovery.sh@93 -- # get_bdev_list 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.914 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.914 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # sort 00:26:22.914 11:53:55 -- host/discovery.sh@55 -- # xargs 00:26:22.914 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.174 11:53:55 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:26:23.174 11:53:55 -- host/discovery.sh@94 -- # get_notification_count 00:26:23.174 11:53:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:23.174 11:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.174 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:26:23.174 11:53:55 -- host/discovery.sh@74 -- # jq '. | length' 00:26:23.174 11:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.174 11:53:56 -- host/discovery.sh@74 -- # notification_count=0 00:26:23.174 11:53:56 -- host/discovery.sh@75 -- # notify_id=0 00:26:23.174 11:53:56 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:26:23.174 11:53:56 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:23.174 11:53:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.174 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:26:23.174 11:53:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.174 11:53:56 -- host/discovery.sh@100 -- # sleep 1 00:26:23.743 [2024-11-20 11:53:56.537882] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.743 [2024-11-20 11:53:56.537908] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.743 [2024-11-20 11:53:56.537920] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.743 [2024-11-20 11:53:56.623801] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:23.743 [2024-11-20 11:53:56.678849] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:23.743 [2024-11-20 11:53:56.678873] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:24.313 11:53:57 -- host/discovery.sh@101 -- # get_subsystem_names 00:26:24.313 11:53:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.313 11:53:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.313 11:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.313 11:53:57 -- host/discovery.sh@59 -- # sort 00:26:24.313 11:53:57 -- host/discovery.sh@59 -- # xargs 00:26:24.313 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.313 11:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.313 11:53:57 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.313 11:53:57 -- host/discovery.sh@102 -- # get_bdev_list 00:26:24.313 11:53:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:26:24.313 11:53:57 -- host/discovery.sh@55 -- # sort 00:26:24.313 11:53:57 -- host/discovery.sh@55 -- # xargs 00:26:24.313 11:53:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.313 11:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.313 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.313 11:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.313 11:53:57 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:24.313 11:53:57 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:26:24.313 11:53:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.313 11:53:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.314 11:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.314 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.314 11:53:57 -- host/discovery.sh@63 -- # sort -n 00:26:24.314 11:53:57 -- host/discovery.sh@63 -- # xargs 00:26:24.314 11:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.314 11:53:57 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:26:24.314 11:53:57 -- host/discovery.sh@104 -- # get_notification_count 00:26:24.314 11:53:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:24.314 11:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.314 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.314 11:53:57 -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.314 11:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.314 11:53:57 -- host/discovery.sh@74 -- # notification_count=1 00:26:24.314 11:53:57 -- host/discovery.sh@75 -- # notify_id=1 00:26:24.314 11:53:57 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:26:24.314 11:53:57 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:24.314 11:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.314 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.314 11:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.314 11:53:57 -- host/discovery.sh@109 -- # sleep 1 00:26:25.264 11:53:58 -- host/discovery.sh@110 -- # get_bdev_list 00:26:25.264 11:53:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.264 11:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.264 11:53:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.264 11:53:58 -- host/discovery.sh@55 -- # sort 00:26:25.264 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.264 11:53:58 -- host/discovery.sh@55 -- # xargs 00:26:25.264 11:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.539 11:53:58 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.539 11:53:58 -- host/discovery.sh@111 -- # get_notification_count 00:26:25.539 11:53:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:25.539 11:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.539 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 11:53:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.539 11:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.539 11:53:58 -- host/discovery.sh@74 -- # notification_count=1 00:26:25.539 11:53:58 -- host/discovery.sh@75 -- # notify_id=2 00:26:25.539 11:53:58 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:26:25.539 11:53:58 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:25.539 11:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.539 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.539 [2024-11-20 11:53:58.349871] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:25.539 [2024-11-20 11:53:58.350686] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:25.539 [2024-11-20 11:53:58.350718] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.539 11:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.539 11:53:58 -- host/discovery.sh@117 -- # sleep 1 00:26:25.539 [2024-11-20 11:53:58.437553] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:25.539 [2024-11-20 11:53:58.496700] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:25.539 [2024-11-20 11:53:58.496731] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:25.539 [2024-11-20 11:53:58.496736] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:26.479 11:53:59 -- host/discovery.sh@118 -- # get_subsystem_names 00:26:26.479 11:53:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.479 11:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.479 11:53:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.479 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.479 11:53:59 -- host/discovery.sh@59 -- # sort 00:26:26.479 11:53:59 -- host/discovery.sh@59 -- # xargs 00:26:26.479 11:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.479 11:53:59 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.479 11:53:59 -- host/discovery.sh@119 -- # get_bdev_list 00:26:26.479 11:53:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.479 11:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.479 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.479 11:53:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.479 11:53:59 -- host/discovery.sh@55 -- # xargs 00:26:26.479 11:53:59 -- host/discovery.sh@55 -- # sort 00:26:26.479 11:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.479 11:53:59 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:26.479 11:53:59 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:26:26.479 11:53:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:26.479 11:53:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:26.479 11:53:59 -- host/discovery.sh@63 -- # sort -n 00:26:26.479 11:53:59 -- host/discovery.sh@63 -- # xargs 00:26:26.479 11:53:59 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:26.479 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.479 11:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.479 11:53:59 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:26.479 11:53:59 -- host/discovery.sh@121 -- # get_notification_count 00:26:26.740 11:53:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:26.740 11:53:59 -- host/discovery.sh@74 -- # jq '. | length' 00:26:26.740 11:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.740 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.740 11:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.740 11:53:59 -- host/discovery.sh@74 -- # notification_count=0 00:26:26.740 11:53:59 -- host/discovery.sh@75 -- # notify_id=2 00:26:26.740 11:53:59 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:26:26.740 11:53:59 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.740 11:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.740 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.740 [2024-11-20 11:53:59.572203] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:26.740 [2024-11-20 11:53:59.572229] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:26.740 11:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.740 11:53:59 -- host/discovery.sh@127 -- # sleep 1 00:26:26.740 [2024-11-20 11:53:59.581509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.740 [2024-11-20 11:53:59.581540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.740 [2024-11-20 11:53:59.581547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.740 [2024-11-20 11:53:59.581553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.740 [2024-11-20 11:53:59.581560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.741 [2024-11-20 11:53:59.581565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.741 [2024-11-20 11:53:59.581570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.741 [2024-11-20 11:53:59.581575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.741 [2024-11-20 11:53:59.581580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd99c0 is same with the state(5) to be set 00:26:26.741 [2024-11-20 11:53:59.591463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd99c0 (9): Bad file descriptor 00:26:26.741 [2024-11-20 11:53:59.601457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.741 [2024-11-20 11:53:59.601551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:26:26.741 [2024-11-20 11:53:59.601575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.601583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd99c0 with addr=10.0.0.2, port=4420 00:26:26.741 [2024-11-20 11:53:59.601589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd99c0 is same with the state(5) to be set 00:26:26.741 [2024-11-20 11:53:59.601599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd99c0 (9): Bad file descriptor 00:26:26.741 [2024-11-20 11:53:59.601609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.741 [2024-11-20 11:53:59.601614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.741 [2024-11-20 11:53:59.601620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:26.741 [2024-11-20 11:53:59.601629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.741 [2024-11-20 11:53:59.611476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.741 [2024-11-20 11:53:59.611542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.611564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.611571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd99c0 with addr=10.0.0.2, port=4420 00:26:26.741 [2024-11-20 11:53:59.611576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd99c0 is same with the state(5) to be set 00:26:26.741 [2024-11-20 11:53:59.611584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd99c0 (9): Bad file descriptor 00:26:26.741 [2024-11-20 11:53:59.611592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.741 [2024-11-20 11:53:59.611597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.741 [2024-11-20 11:53:59.611601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:26.741 [2024-11-20 11:53:59.611610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.741 [2024-11-20 11:53:59.621493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.741 [2024-11-20 11:53:59.621566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.621589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.621597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd99c0 with addr=10.0.0.2, port=4420 00:26:26.741 [2024-11-20 11:53:59.621602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd99c0 is same with the state(5) to be set 00:26:26.741 [2024-11-20 11:53:59.621611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd99c0 (9): Bad file descriptor 00:26:26.741 [2024-11-20 11:53:59.621619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.741 [2024-11-20 11:53:59.621623] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.741 [2024-11-20 11:53:59.621628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:26.741 [2024-11-20 11:53:59.621637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.741 [2024-11-20 11:53:59.631510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.741 [2024-11-20 11:53:59.631573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.631593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.631601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd99c0 with addr=10.0.0.2, port=4420 00:26:26.741 [2024-11-20 11:53:59.631606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd99c0 is same with the state(5) to be set 00:26:26.741 [2024-11-20 11:53:59.631614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd99c0 (9): Bad file descriptor 00:26:26.741 [2024-11-20 11:53:59.631621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.741 [2024-11-20 11:53:59.631625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.741 [2024-11-20 11:53:59.631630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:26.741 [2024-11-20 11:53:59.631638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.741 [2024-11-20 11:53:59.641525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.741 [2024-11-20 11:53:59.641594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.641614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.641621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd99c0 with addr=10.0.0.2, port=4420 00:26:26.741 [2024-11-20 11:53:59.641627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd99c0 is same with the state(5) to be set 00:26:26.741 [2024-11-20 11:53:59.641635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd99c0 (9): Bad file descriptor 00:26:26.741 [2024-11-20 11:53:59.641642] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.741 [2024-11-20 11:53:59.641647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.741 [2024-11-20 11:53:59.641651] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:26.741 [2024-11-20 11:53:59.641659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.741 [2024-11-20 11:53:59.651539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.741 [2024-11-20 11:53:59.651599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.651619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.741 [2024-11-20 11:53:59.651626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd99c0 with addr=10.0.0.2, port=4420 00:26:26.741 [2024-11-20 11:53:59.651631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd99c0 is same with the state(5) to be set 00:26:26.741 [2024-11-20 11:53:59.651639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd99c0 (9): Bad file descriptor 00:26:26.741 [2024-11-20 11:53:59.651646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.741 [2024-11-20 11:53:59.651650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.741 [2024-11-20 11:53:59.651655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:26.741 [2024-11-20 11:53:59.651662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.741 [2024-11-20 11:53:59.658080] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:26.741 [2024-11-20 11:53:59.658102] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:27.682 11:54:00 -- host/discovery.sh@128 -- # get_subsystem_names 00:26:27.682 11:54:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:27.682 11:54:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:27.682 11:54:00 -- host/discovery.sh@59 -- # sort 00:26:27.682 11:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.682 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.682 11:54:00 -- host/discovery.sh@59 -- # xargs 00:26:27.682 11:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.682 11:54:00 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.682 11:54:00 -- host/discovery.sh@129 -- # get_bdev_list 00:26:27.682 11:54:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.682 11:54:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.682 11:54:00 -- host/discovery.sh@55 -- # sort 00:26:27.682 11:54:00 -- host/discovery.sh@55 -- # xargs 00:26:27.682 11:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.682 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.682 11:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.682 11:54:00 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.682 11:54:00 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:26:27.682 11:54:00 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:27.682 11:54:00 -- host/discovery.sh@63 -- # sort -n 00:26:27.682 11:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.682 11:54:00 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:27.682 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.682 11:54:00 -- host/discovery.sh@63 -- # xargs 00:26:27.682 11:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.943 11:54:00 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:26:27.943 11:54:00 -- host/discovery.sh@131 -- # get_notification_count 00:26:27.943 11:54:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:27.943 11:54:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:27.943 11:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.943 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.943 11:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.943 11:54:00 -- host/discovery.sh@74 -- # notification_count=0 00:26:27.943 11:54:00 -- host/discovery.sh@75 -- # notify_id=2 00:26:27.943 11:54:00 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:26:27.943 11:54:00 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:27.943 11:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.943 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.943 11:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.943 11:54:00 -- host/discovery.sh@135 -- # sleep 1 00:26:28.882 11:54:01 -- host/discovery.sh@136 -- # get_subsystem_names 00:26:28.882 11:54:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:28.882 11:54:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:28.882 11:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.882 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.882 11:54:01 -- host/discovery.sh@59 -- # sort 00:26:28.882 11:54:01 -- host/discovery.sh@59 -- # xargs 00:26:28.882 11:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.882 11:54:01 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:26:28.882 11:54:01 -- host/discovery.sh@137 -- # get_bdev_list 00:26:28.882 11:54:01 -- host/discovery.sh@55 -- # sort 00:26:28.882 11:54:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.882 11:54:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.882 11:54:01 -- host/discovery.sh@55 -- # xargs 00:26:28.882 11:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.882 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.882 11:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.882 11:54:01 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:26:28.882 11:54:01 -- host/discovery.sh@138 -- # get_notification_count 00:26:28.882 11:54:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:28.882 11:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.882 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.882 11:54:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:29.142 11:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.142 11:54:01 -- host/discovery.sh@74 -- # notification_count=2 00:26:29.142 11:54:01 -- host/discovery.sh@75 -- # notify_id=4 00:26:29.142 11:54:01 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:26:29.142 11:54:01 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.142 11:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.142 11:54:01 -- common/autotest_common.sh@10 -- # set +x 00:26:30.080 [2024-11-20 11:54:02.976119] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.080 [2024-11-20 11:54:02.976144] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.080 [2024-11-20 11:54:02.976155] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.080 [2024-11-20 11:54:03.062024] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:30.080 [2024-11-20 11:54:03.120685] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:30.080 [2024-11-20 11:54:03.120721] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:30.341 11:54:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.341 11:54:03 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.341 11:54:03 -- common/autotest_common.sh@650 -- # local es=0 00:26:30.341 11:54:03 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.341 11:54:03 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.341 11:54:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.341 11:54:03 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.341 11:54:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.341 11:54:03 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.341 11:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.341 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:26:30.341 2024/11/20 11:54:03 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:30.341 request: 00:26:30.341 { 00:26:30.341 "method": "bdev_nvme_start_discovery", 00:26:30.341 "params": { 00:26:30.341 "name": "nvme", 00:26:30.341 "trtype": "tcp", 00:26:30.341 "traddr": "10.0.0.2", 00:26:30.341 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:30.341 "adrfam": "ipv4", 00:26:30.341 "trsvcid": "8009", 00:26:30.341 "wait_for_attach": true 00:26:30.341 } 00:26:30.341 } 00:26:30.341 Got JSON-RPC error response 00:26:30.341 GoRPCClient: error on JSON-RPC call 00:26:30.341 11:54:03 -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.341 11:54:03 -- common/autotest_common.sh@653 -- # es=1 00:26:30.341 11:54:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.341 11:54:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.341 11:54:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.341 11:54:03 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:26:30.341 11:54:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:30.342 11:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.342 11:54:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:30.342 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:26:30.342 11:54:03 -- host/discovery.sh@67 -- # sort 00:26:30.342 11:54:03 -- host/discovery.sh@67 -- # xargs 00:26:30.342 11:54:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.342 11:54:03 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:26:30.342 11:54:03 -- host/discovery.sh@147 -- # get_bdev_list 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.342 11:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # sort 00:26:30.342 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # xargs 00:26:30.342 11:54:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.342 11:54:03 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:30.342 11:54:03 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.342 11:54:03 -- common/autotest_common.sh@650 -- # local es=0 00:26:30.342 11:54:03 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.342 11:54:03 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.342 11:54:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.342 11:54:03 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.342 11:54:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.342 11:54:03 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.342 11:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.342 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:26:30.342 2024/11/20 11:54:03 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:30.342 request: 00:26:30.342 { 00:26:30.342 "method": "bdev_nvme_start_discovery", 00:26:30.342 "params": { 00:26:30.342 "name": "nvme_second", 00:26:30.342 "trtype": "tcp", 00:26:30.342 "traddr": "10.0.0.2", 00:26:30.342 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:30.342 "adrfam": "ipv4", 00:26:30.342 "trsvcid": "8009", 00:26:30.342 "wait_for_attach": true 00:26:30.342 } 00:26:30.342 } 00:26:30.342 Got JSON-RPC error response 00:26:30.342 
GoRPCClient: error on JSON-RPC call 00:26:30.342 11:54:03 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.342 11:54:03 -- common/autotest_common.sh@653 -- # es=1 00:26:30.342 11:54:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.342 11:54:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.342 11:54:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.342 11:54:03 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:26:30.342 11:54:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:30.342 11:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.342 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:26:30.342 11:54:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:30.342 11:54:03 -- host/discovery.sh@67 -- # sort 00:26:30.342 11:54:03 -- host/discovery.sh@67 -- # xargs 00:26:30.342 11:54:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.342 11:54:03 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:26:30.342 11:54:03 -- host/discovery.sh@153 -- # get_bdev_list 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.342 11:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.342 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # sort 00:26:30.342 11:54:03 -- host/discovery.sh@55 -- # xargs 00:26:30.342 11:54:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.342 11:54:03 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:30.342 11:54:03 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.342 11:54:03 -- common/autotest_common.sh@650 -- # local es=0 00:26:30.342 11:54:03 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.342 11:54:03 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.342 11:54:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.342 11:54:03 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.342 11:54:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.342 11:54:03 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.342 11:54:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.342 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:26:31.724 [2024-11-20 11:54:04.380652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.724 [2024-11-20 11:54:04.380712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.724 [2024-11-20 11:54:04.380722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd5970 with addr=10.0.0.2, port=8010 00:26:31.724 [2024-11-20 11:54:04.380738] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:31.724 [2024-11-20 11:54:04.380744] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:31.724 [2024-11-20 11:54:04.380751] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:32.666 [2024-11-20 11:54:05.378715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.666 [2024-11-20 11:54:05.378779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.666 [2024-11-20 11:54:05.378787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd5970 with addr=10.0.0.2, port=8010 00:26:32.666 [2024-11-20 11:54:05.378801] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:32.666 [2024-11-20 11:54:05.378806] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:32.666 [2024-11-20 11:54:05.378811] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:33.606 [2024-11-20 11:54:06.376718] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:33.606 2024/11/20 11:54:06 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:26:33.606 request: 00:26:33.606 { 00:26:33.606 "method": "bdev_nvme_start_discovery", 00:26:33.606 "params": { 00:26:33.606 "name": "nvme_second", 00:26:33.606 "trtype": "tcp", 00:26:33.606 "traddr": "10.0.0.2", 00:26:33.606 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:33.606 "adrfam": "ipv4", 00:26:33.606 "trsvcid": "8010", 00:26:33.606 "attach_timeout_ms": 3000 00:26:33.606 } 00:26:33.606 } 00:26:33.606 Got JSON-RPC error response 00:26:33.606 GoRPCClient: error on JSON-RPC call 00:26:33.606 11:54:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:33.606 11:54:06 -- common/autotest_common.sh@653 -- # es=1 00:26:33.606 11:54:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:33.606 11:54:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:33.606 11:54:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:33.606 11:54:06 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:26:33.606 11:54:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:33.606 11:54:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:33.606 11:54:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.606 11:54:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.606 11:54:06 -- host/discovery.sh@67 -- # sort 00:26:33.606 11:54:06 -- host/discovery.sh@67 -- # xargs 00:26:33.606 11:54:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.606 11:54:06 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:26:33.606 11:54:06 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:26:33.607 11:54:06 -- host/discovery.sh@162 -- # kill 85801 00:26:33.607 11:54:06 -- host/discovery.sh@163 -- # nvmftestfini 00:26:33.607 11:54:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:33.607 11:54:06 -- nvmf/common.sh@116 -- # sync 00:26:33.607 11:54:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:33.607 11:54:06 -- nvmf/common.sh@119 -- # set +e 00:26:33.607 11:54:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:33.607 11:54:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:33.607 rmmod nvme_tcp 00:26:33.607 rmmod nvme_fabrics 00:26:33.607 rmmod nvme_keyring 00:26:33.607 11:54:06 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:26:33.607 11:54:06 -- nvmf/common.sh@123 -- # set -e 00:26:33.607 11:54:06 -- nvmf/common.sh@124 -- # return 0 00:26:33.607 11:54:06 -- nvmf/common.sh@477 -- # '[' -n 85745 ']' 00:26:33.607 11:54:06 -- nvmf/common.sh@478 -- # killprocess 85745 00:26:33.607 11:54:06 -- common/autotest_common.sh@936 -- # '[' -z 85745 ']' 00:26:33.607 11:54:06 -- common/autotest_common.sh@940 -- # kill -0 85745 00:26:33.607 11:54:06 -- common/autotest_common.sh@941 -- # uname 00:26:33.607 11:54:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:33.607 11:54:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85745 00:26:33.607 killing process with pid 85745 00:26:33.607 11:54:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:33.607 11:54:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:33.607 11:54:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85745' 00:26:33.607 11:54:06 -- common/autotest_common.sh@955 -- # kill 85745 00:26:33.607 11:54:06 -- common/autotest_common.sh@960 -- # wait 85745 00:26:33.866 11:54:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:33.866 11:54:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:33.866 11:54:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:33.866 11:54:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.866 11:54:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:33.866 11:54:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.866 11:54:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.866 11:54:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.866 11:54:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:33.866 00:26:33.866 real 0m13.841s 00:26:33.866 user 0m26.714s 00:26:33.866 sys 0m1.834s 00:26:33.866 11:54:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:33.866 11:54:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.866 ************************************ 00:26:33.866 END TEST nvmf_discovery 00:26:33.866 ************************************ 00:26:34.128 11:54:06 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:34.129 11:54:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:34.129 11:54:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:34.129 11:54:06 -- common/autotest_common.sh@10 -- # set +x 00:26:34.129 ************************************ 00:26:34.129 START TEST nvmf_discovery_remove_ifc 00:26:34.129 ************************************ 00:26:34.129 11:54:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:34.129 * Looking for test storage... 
00:26:34.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:34.129 11:54:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:34.129 11:54:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:34.129 11:54:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:34.129 11:54:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:34.129 11:54:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:34.129 11:54:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:34.129 11:54:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:34.129 11:54:07 -- scripts/common.sh@335 -- # IFS=.-: 00:26:34.129 11:54:07 -- scripts/common.sh@335 -- # read -ra ver1 00:26:34.129 11:54:07 -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.129 11:54:07 -- scripts/common.sh@336 -- # read -ra ver2 00:26:34.129 11:54:07 -- scripts/common.sh@337 -- # local 'op=<' 00:26:34.129 11:54:07 -- scripts/common.sh@339 -- # ver1_l=2 00:26:34.129 11:54:07 -- scripts/common.sh@340 -- # ver2_l=1 00:26:34.129 11:54:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:34.129 11:54:07 -- scripts/common.sh@343 -- # case "$op" in 00:26:34.129 11:54:07 -- scripts/common.sh@344 -- # : 1 00:26:34.129 11:54:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:34.129 11:54:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:34.129 11:54:07 -- scripts/common.sh@364 -- # decimal 1 00:26:34.129 11:54:07 -- scripts/common.sh@352 -- # local d=1 00:26:34.129 11:54:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.129 11:54:07 -- scripts/common.sh@354 -- # echo 1 00:26:34.129 11:54:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:34.129 11:54:07 -- scripts/common.sh@365 -- # decimal 2 00:26:34.129 11:54:07 -- scripts/common.sh@352 -- # local d=2 00:26:34.129 11:54:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.129 11:54:07 -- scripts/common.sh@354 -- # echo 2 00:26:34.129 11:54:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:34.129 11:54:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:34.129 11:54:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:34.129 11:54:07 -- scripts/common.sh@367 -- # return 0 00:26:34.129 11:54:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.129 11:54:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.129 --rc genhtml_branch_coverage=1 00:26:34.129 --rc genhtml_function_coverage=1 00:26:34.129 --rc genhtml_legend=1 00:26:34.129 --rc geninfo_all_blocks=1 00:26:34.129 --rc geninfo_unexecuted_blocks=1 00:26:34.129 00:26:34.129 ' 00:26:34.129 11:54:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.129 --rc genhtml_branch_coverage=1 00:26:34.129 --rc genhtml_function_coverage=1 00:26:34.129 --rc genhtml_legend=1 00:26:34.129 --rc geninfo_all_blocks=1 00:26:34.129 --rc geninfo_unexecuted_blocks=1 00:26:34.129 00:26:34.129 ' 00:26:34.129 11:54:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.129 --rc genhtml_branch_coverage=1 00:26:34.129 --rc genhtml_function_coverage=1 00:26:34.129 --rc genhtml_legend=1 00:26:34.129 --rc geninfo_all_blocks=1 00:26:34.129 --rc geninfo_unexecuted_blocks=1 00:26:34.129 00:26:34.129 ' 00:26:34.129 
11:54:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.129 --rc genhtml_branch_coverage=1 00:26:34.129 --rc genhtml_function_coverage=1 00:26:34.129 --rc genhtml_legend=1 00:26:34.129 --rc geninfo_all_blocks=1 00:26:34.129 --rc geninfo_unexecuted_blocks=1 00:26:34.129 00:26:34.129 ' 00:26:34.129 11:54:07 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:34.129 11:54:07 -- nvmf/common.sh@7 -- # uname -s 00:26:34.129 11:54:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.129 11:54:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.129 11:54:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.129 11:54:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.129 11:54:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.129 11:54:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.129 11:54:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.129 11:54:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.129 11:54:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.129 11:54:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.389 11:54:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:26:34.389 11:54:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:26:34.389 11:54:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.389 11:54:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.389 11:54:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:34.389 11:54:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:34.389 11:54:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.389 11:54:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.389 11:54:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.389 11:54:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.389 11:54:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.389 11:54:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.389 11:54:07 -- paths/export.sh@5 -- # export PATH 00:26:34.389 11:54:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.389 11:54:07 -- nvmf/common.sh@46 -- # : 0 00:26:34.389 11:54:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:34.389 11:54:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:34.389 11:54:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:34.389 11:54:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.389 11:54:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.389 11:54:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:34.389 11:54:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:34.389 11:54:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:34.389 11:54:07 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:34.389 11:54:07 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:34.389 11:54:07 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:34.389 11:54:07 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:34.390 11:54:07 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:34.390 11:54:07 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:34.390 11:54:07 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:34.390 11:54:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:34.390 11:54:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.390 11:54:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:34.390 11:54:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:34.390 11:54:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:34.390 11:54:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.390 11:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.390 11:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.390 11:54:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:34.390 11:54:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:34.390 11:54:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:34.390 11:54:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:34.390 11:54:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:34.390 11:54:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:34.390 11:54:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.390 11:54:07 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.390 11:54:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:34.390 11:54:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:34.390 11:54:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:34.390 11:54:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:34.390 11:54:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:34.390 11:54:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.390 11:54:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:34.390 11:54:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:34.390 11:54:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:34.390 11:54:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:34.390 11:54:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:34.390 11:54:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:34.390 Cannot find device "nvmf_tgt_br" 00:26:34.390 11:54:07 -- nvmf/common.sh@154 -- # true 00:26:34.390 11:54:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:34.390 Cannot find device "nvmf_tgt_br2" 00:26:34.390 11:54:07 -- nvmf/common.sh@155 -- # true 00:26:34.390 11:54:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:34.390 11:54:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:34.390 Cannot find device "nvmf_tgt_br" 00:26:34.390 11:54:07 -- nvmf/common.sh@157 -- # true 00:26:34.390 11:54:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:34.390 Cannot find device "nvmf_tgt_br2" 00:26:34.390 11:54:07 -- nvmf/common.sh@158 -- # true 00:26:34.390 11:54:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:34.390 11:54:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:34.390 11:54:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:34.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.390 11:54:07 -- nvmf/common.sh@161 -- # true 00:26:34.390 11:54:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:34.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.390 11:54:07 -- nvmf/common.sh@162 -- # true 00:26:34.390 11:54:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:34.390 11:54:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:34.390 11:54:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:34.390 11:54:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:34.390 11:54:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:34.390 11:54:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:34.650 11:54:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:34.650 11:54:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:34.650 11:54:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:34.650 11:54:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:34.650 11:54:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:34.650 11:54:07 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:34.650 11:54:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:34.650 11:54:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:34.650 11:54:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:34.650 11:54:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:34.650 11:54:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:34.650 11:54:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:34.650 11:54:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:34.650 11:54:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:34.650 11:54:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:34.650 11:54:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:34.650 11:54:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:34.650 11:54:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:34.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:26:34.650 00:26:34.651 --- 10.0.0.2 ping statistics --- 00:26:34.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.651 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:26:34.651 11:54:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:34.651 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:34.651 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:26:34.651 00:26:34.651 --- 10.0.0.3 ping statistics --- 00:26:34.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.651 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:34.651 11:54:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:34.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:34.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:26:34.651 00:26:34.651 --- 10.0.0.1 ping statistics --- 00:26:34.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.651 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:34.651 11:54:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.651 11:54:07 -- nvmf/common.sh@421 -- # return 0 00:26:34.651 11:54:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:34.651 11:54:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.651 11:54:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:34.651 11:54:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:34.651 11:54:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.651 11:54:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:34.651 11:54:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:34.651 11:54:07 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:34.651 11:54:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:34.651 11:54:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:34.651 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.651 11:54:07 -- nvmf/common.sh@469 -- # nvmfpid=86310 00:26:34.651 11:54:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:34.651 11:54:07 -- nvmf/common.sh@470 -- # waitforlisten 86310 00:26:34.651 11:54:07 -- common/autotest_common.sh@829 -- # '[' -z 86310 ']' 00:26:34.651 11:54:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.651 11:54:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.651 11:54:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.651 11:54:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.651 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.651 [2024-11-20 11:54:07.640193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:34.651 [2024-11-20 11:54:07.640255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.911 [2024-11-20 11:54:07.768335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.911 [2024-11-20 11:54:07.845678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:34.911 [2024-11-20 11:54:07.845809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.911 [2024-11-20 11:54:07.845816] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.911 [2024-11-20 11:54:07.845821] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:34.911 [2024-11-20 11:54:07.845839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.482 11:54:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.482 11:54:08 -- common/autotest_common.sh@862 -- # return 0 00:26:35.482 11:54:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:35.482 11:54:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:35.482 11:54:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.742 11:54:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.742 11:54:08 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:35.742 11:54:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.742 11:54:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.742 [2024-11-20 11:54:08.556821] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.742 [2024-11-20 11:54:08.564917] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:35.742 null0 00:26:35.742 [2024-11-20 11:54:08.596807] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.742 11:54:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.742 11:54:08 -- host/discovery_remove_ifc.sh@59 -- # hostpid=86360 00:26:35.742 11:54:08 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:35.742 11:54:08 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 86360 /tmp/host.sock 00:26:35.742 11:54:08 -- common/autotest_common.sh@829 -- # '[' -z 86360 ']' 00:26:35.742 11:54:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:35.742 11:54:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.742 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:35.742 11:54:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:35.742 11:54:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.742 11:54:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.742 [2024-11-20 11:54:08.672105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:35.742 [2024-11-20 11:54:08.672161] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86360 ] 00:26:36.007 [2024-11-20 11:54:08.808606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.007 [2024-11-20 11:54:08.886737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:36.007 [2024-11-20 11:54:08.886861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.584 11:54:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.584 11:54:09 -- common/autotest_common.sh@862 -- # return 0 00:26:36.584 11:54:09 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:36.584 11:54:09 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:36.584 11:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.584 11:54:09 -- common/autotest_common.sh@10 -- # set +x 00:26:36.584 11:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.584 11:54:09 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:36.584 11:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.584 11:54:09 -- common/autotest_common.sh@10 -- # set +x 00:26:36.584 11:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.584 11:54:09 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:36.584 11:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.584 11:54:09 -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 [2024-11-20 11:54:10.625311] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:37.966 [2024-11-20 11:54:10.625339] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:37.966 [2024-11-20 11:54:10.625351] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:37.966 [2024-11-20 11:54:10.711223] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:37.966 [2024-11-20 11:54:10.766259] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:37.966 [2024-11-20 11:54:10.766302] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:37.966 [2024-11-20 11:54:10.766322] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:37.966 [2024-11-20 11:54:10.766334] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:37.966 [2024-11-20 11:54:10.766352] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:37.966 11:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.966 [2024-11-20 
11:54:10.773779] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb5f840 was disconnected and freed. delete nvme_qpair. 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.966 11:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.966 11:54:10 -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.966 11:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.966 11:54:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.966 11:54:10 -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.966 11:54:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.966 11:54:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.905 11:54:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.905 11:54:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.905 11:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.905 11:54:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.905 11:54:11 -- common/autotest_common.sh@10 -- # set +x 00:26:38.905 11:54:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.905 11:54:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.905 11:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.164 11:54:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.164 11:54:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.104 11:54:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.104 11:54:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.104 11:54:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.104 11:54:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.104 11:54:12 -- common/autotest_common.sh@10 -- # set +x 00:26:40.104 11:54:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.104 11:54:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.104 11:54:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.104 11:54:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.104 11:54:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.057 11:54:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.057 11:54:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
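The wait_for_bdev calls above simply poll get_bdev_list once per second until the reported bdev names match what the test expects (nvme0n1 after attach, the empty string once the interface is pulled). Reduced to its essentials, using the exact RPC and jq pipeline seen in the trace (the timeout handling of the real helper is omitted):

    get_bdev_list() {
        # List the bdev names known to the host app on /tmp/host.sock, sorted, on one line.
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the list equals the expected value ("" means every bdev is gone).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }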
00:26:41.057 11:54:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.057 11:54:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.057 11:54:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.057 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:26:41.057 11:54:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.057 11:54:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.057 11:54:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:41.057 11:54:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.439 11:54:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.439 11:54:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.439 11:54:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.439 11:54:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.439 11:54:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.439 11:54:15 -- common/autotest_common.sh@10 -- # set +x 00:26:42.439 11:54:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.439 11:54:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.439 11:54:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.439 11:54:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.379 11:54:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.379 11:54:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.379 11:54:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.379 11:54:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.379 11:54:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.379 11:54:16 -- common/autotest_common.sh@10 -- # set +x 00:26:43.379 11:54:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.380 11:54:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.380 11:54:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.380 11:54:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.380 [2024-11-20 11:54:16.184213] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:43.380 [2024-11-20 11:54:16.184263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.380 [2024-11-20 11:54:16.184273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 11:54:16.184283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.380 [2024-11-20 11:54:16.184289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 11:54:16.184295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.380 [2024-11-20 11:54:16.184300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 11:54:16.184306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.380 [2024-11-20 11:54:16.184311] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 11:54:16.184318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.380 [2024-11-20 11:54:16.184323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 11:54:16.184328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad69f0 is same with the state(5) to be set 00:26:43.380 [2024-11-20 11:54:16.194191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad69f0 (9): Bad file descriptor 00:26:43.380 [2024-11-20 11:54:16.204187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:44.320 11:54:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.320 11:54:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.320 11:54:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.320 11:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.320 11:54:17 -- common/autotest_common.sh@10 -- # set +x 00:26:44.320 11:54:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.320 11:54:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.320 [2024-11-20 11:54:17.234678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:45.260 [2024-11-20 11:54:18.258724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:45.260 [2024-11-20 11:54:18.258857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad69f0 with addr=10.0.0.2, port=4420 00:26:45.260 [2024-11-20 11:54:18.258895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad69f0 is same with the state(5) to be set 00:26:45.260 [2024-11-20 11:54:18.258957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.260 [2024-11-20 11:54:18.258981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.260 [2024-11-20 11:54:18.259001] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.260 [2024-11-20 11:54:18.259022] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:45.260 [2024-11-20 11:54:18.260157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad69f0 (9): Bad file descriptor 00:26:45.260 [2024-11-20 11:54:18.260241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.260 [2024-11-20 11:54:18.260301] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:45.260 [2024-11-20 11:54:18.260393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.260 [2024-11-20 11:54:18.260428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.260 [2024-11-20 11:54:18.260456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.260 [2024-11-20 11:54:18.260479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.260 [2024-11-20 11:54:18.260503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.260 [2024-11-20 11:54:18.260524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.260 [2024-11-20 11:54:18.260547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.260 [2024-11-20 11:54:18.260569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.260 [2024-11-20 11:54:18.260593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.260 [2024-11-20 11:54:18.260615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.260 [2024-11-20 11:54:18.260636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
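The cascade above is the expected result of taking nvmf_tgt_if down: the discovery controller was attached with a 1 s reconnect delay, a 1 s fast-I/O-fail timeout and a 2 s controller-loss timeout, so reconnect attempts fail with errno 110, the reset path gives up, and the discovery entry for cnode0 is removed. The attach the test issued earlier, with those knobs, was equivalent to:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach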
00:26:45.260 [2024-11-20 11:54:18.260696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad6e00 (9): Bad file descriptor 00:26:45.260 [2024-11-20 11:54:18.261272] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:45.260 [2024-11-20 11:54:18.261317] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:45.260 11:54:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.260 11:54:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.260 11:54:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.643 11:54:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.643 11:54:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.643 11:54:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.643 11:54:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.643 11:54:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.643 11:54:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:46.643 11:54:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.582 [2024-11-20 11:54:20.267897] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:47.583 [2024-11-20 11:54:20.267918] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:47.583 [2024-11-20 11:54:20.267931] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.583 [2024-11-20 11:54:20.353805] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:47.583 [2024-11-20 11:54:20.408304] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:47.583 [2024-11-20 11:54:20.408344] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:47.583 [2024-11-20 11:54:20.408360] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:47.583 [2024-11-20 11:54:20.408375] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:26:47.583 [2024-11-20 11:54:20.408381] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:47.583 [2024-11-20 11:54:20.416381] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb1a080 was disconnected and freed. delete nvme_qpair. 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.583 11:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.583 11:54:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.583 11:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:47.583 11:54:20 -- host/discovery_remove_ifc.sh@90 -- # killprocess 86360 00:26:47.583 11:54:20 -- common/autotest_common.sh@936 -- # '[' -z 86360 ']' 00:26:47.583 11:54:20 -- common/autotest_common.sh@940 -- # kill -0 86360 00:26:47.583 11:54:20 -- common/autotest_common.sh@941 -- # uname 00:26:47.583 11:54:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:47.583 11:54:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86360 00:26:47.583 killing process with pid 86360 00:26:47.583 11:54:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:47.583 11:54:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:47.583 11:54:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86360' 00:26:47.583 11:54:20 -- common/autotest_common.sh@955 -- # kill 86360 00:26:47.583 11:54:20 -- common/autotest_common.sh@960 -- # wait 86360 00:26:47.842 11:54:20 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:47.842 11:54:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:47.843 11:54:20 -- nvmf/common.sh@116 -- # sync 00:26:47.843 11:54:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:47.843 11:54:20 -- nvmf/common.sh@119 -- # set +e 00:26:47.843 11:54:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:47.843 11:54:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:47.843 rmmod nvme_tcp 00:26:47.843 rmmod nvme_fabrics 00:26:47.843 rmmod nvme_keyring 00:26:47.843 11:54:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:47.843 11:54:20 -- nvmf/common.sh@123 -- # set -e 00:26:47.843 11:54:20 -- nvmf/common.sh@124 -- # return 0 00:26:47.843 11:54:20 -- nvmf/common.sh@477 -- # '[' -n 86310 ']' 00:26:47.843 11:54:20 -- nvmf/common.sh@478 -- # killprocess 86310 00:26:47.843 11:54:20 -- common/autotest_common.sh@936 -- # '[' -z 86310 ']' 00:26:47.843 11:54:20 -- common/autotest_common.sh@940 -- # kill -0 86310 00:26:47.843 11:54:20 -- common/autotest_common.sh@941 -- # uname 00:26:47.843 11:54:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:47.843 11:54:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86310 00:26:48.102 killing process with pid 86310 00:26:48.102 11:54:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:48.102 11:54:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
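killprocess 86360 above follows the standard teardown pattern traced here: confirm the pid is still alive with kill -0, log the process name from ps, send SIGTERM, then wait to reap it. A condensed sketch (the helper name is illustrative; the real function lives in autotest_common.sh):

    stop_pid() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # nothing left to kill
        ps --no-headers -o comm= "$pid"              # the trace logs the name first
        kill "$pid"                                  # SIGTERM, as with 'kill 86360' above
        wait "$pid" 2>/dev/null || true              # reap it when it is our child
    }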
00:26:48.102 11:54:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86310' 00:26:48.102 11:54:20 -- common/autotest_common.sh@955 -- # kill 86310 00:26:48.102 11:54:20 -- common/autotest_common.sh@960 -- # wait 86310 00:26:48.102 11:54:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:48.102 11:54:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:48.102 11:54:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:48.102 11:54:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.102 11:54:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:48.102 11:54:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.102 11:54:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.102 11:54:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.363 11:54:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:48.363 00:26:48.363 real 0m14.233s 00:26:48.363 user 0m24.184s 00:26:48.363 sys 0m1.643s 00:26:48.363 11:54:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:48.363 11:54:21 -- common/autotest_common.sh@10 -- # set +x 00:26:48.363 ************************************ 00:26:48.363 END TEST nvmf_discovery_remove_ifc 00:26:48.363 ************************************ 00:26:48.363 11:54:21 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:26:48.363 11:54:21 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:48.363 11:54:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:48.363 11:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:48.363 11:54:21 -- common/autotest_common.sh@10 -- # set +x 00:26:48.363 ************************************ 00:26:48.363 START TEST nvmf_digest 00:26:48.363 ************************************ 00:26:48.363 11:54:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:48.363 * Looking for test storage... 00:26:48.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:48.363 11:54:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:48.363 11:54:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:48.363 11:54:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:48.624 11:54:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:48.624 11:54:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:48.624 11:54:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:48.624 11:54:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:48.624 11:54:21 -- scripts/common.sh@335 -- # IFS=.-: 00:26:48.624 11:54:21 -- scripts/common.sh@335 -- # read -ra ver1 00:26:48.624 11:54:21 -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.624 11:54:21 -- scripts/common.sh@336 -- # read -ra ver2 00:26:48.624 11:54:21 -- scripts/common.sh@337 -- # local 'op=<' 00:26:48.624 11:54:21 -- scripts/common.sh@339 -- # ver1_l=2 00:26:48.624 11:54:21 -- scripts/common.sh@340 -- # ver2_l=1 00:26:48.624 11:54:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:48.624 11:54:21 -- scripts/common.sh@343 -- # case "$op" in 00:26:48.624 11:54:21 -- scripts/common.sh@344 -- # : 1 00:26:48.624 11:54:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:48.624 11:54:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.624 11:54:21 -- scripts/common.sh@364 -- # decimal 1 00:26:48.624 11:54:21 -- scripts/common.sh@352 -- # local d=1 00:26:48.624 11:54:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.624 11:54:21 -- scripts/common.sh@354 -- # echo 1 00:26:48.624 11:54:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:48.624 11:54:21 -- scripts/common.sh@365 -- # decimal 2 00:26:48.624 11:54:21 -- scripts/common.sh@352 -- # local d=2 00:26:48.624 11:54:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.624 11:54:21 -- scripts/common.sh@354 -- # echo 2 00:26:48.624 11:54:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:48.624 11:54:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:48.624 11:54:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:48.624 11:54:21 -- scripts/common.sh@367 -- # return 0 00:26:48.624 11:54:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.624 11:54:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.625 --rc genhtml_branch_coverage=1 00:26:48.625 --rc genhtml_function_coverage=1 00:26:48.625 --rc genhtml_legend=1 00:26:48.625 --rc geninfo_all_blocks=1 00:26:48.625 --rc geninfo_unexecuted_blocks=1 00:26:48.625 00:26:48.625 ' 00:26:48.625 11:54:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.625 --rc genhtml_branch_coverage=1 00:26:48.625 --rc genhtml_function_coverage=1 00:26:48.625 --rc genhtml_legend=1 00:26:48.625 --rc geninfo_all_blocks=1 00:26:48.625 --rc geninfo_unexecuted_blocks=1 00:26:48.625 00:26:48.625 ' 00:26:48.625 11:54:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.625 --rc genhtml_branch_coverage=1 00:26:48.625 --rc genhtml_function_coverage=1 00:26:48.625 --rc genhtml_legend=1 00:26:48.625 --rc geninfo_all_blocks=1 00:26:48.625 --rc geninfo_unexecuted_blocks=1 00:26:48.625 00:26:48.625 ' 00:26:48.625 11:54:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.625 --rc genhtml_branch_coverage=1 00:26:48.625 --rc genhtml_function_coverage=1 00:26:48.625 --rc genhtml_legend=1 00:26:48.625 --rc geninfo_all_blocks=1 00:26:48.625 --rc geninfo_unexecuted_blocks=1 00:26:48.625 00:26:48.625 ' 00:26:48.625 11:54:21 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:48.625 11:54:21 -- nvmf/common.sh@7 -- # uname -s 00:26:48.625 11:54:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.625 11:54:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.625 11:54:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.625 11:54:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.625 11:54:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.625 11:54:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.625 11:54:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.625 11:54:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.625 11:54:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.625 11:54:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.625 11:54:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:26:48.625 
11:54:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:26:48.625 11:54:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.625 11:54:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.625 11:54:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:48.625 11:54:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:48.625 11:54:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.625 11:54:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.625 11:54:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.625 11:54:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.625 11:54:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.625 11:54:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.625 11:54:21 -- paths/export.sh@5 -- # export PATH 00:26:48.625 11:54:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.625 11:54:21 -- nvmf/common.sh@46 -- # : 0 00:26:48.625 11:54:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:48.625 11:54:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:48.625 11:54:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:48.625 11:54:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.625 11:54:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.625 11:54:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
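Because this run uses NET_TYPE=virt, nvmf_veth_init (traced below) builds the whole fabric in software: a nvmf_tgt_ns_spdk namespace for the target, veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if/nvmf_tgt_br, a nvmf_br bridge tying them together, 10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the namespace, and an iptables rule admitting port 4420. Stripped of the cleanup of any previous topology, and omitting the second target interface (nvmf_tgt_if2/nvmf_tgt_br2, 10.0.0.3) which is set up the same way, the setup amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT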
00:26:48.625 11:54:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:48.625 11:54:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:48.625 11:54:21 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:48.625 11:54:21 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:48.625 11:54:21 -- host/digest.sh@16 -- # runtime=2 00:26:48.625 11:54:21 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:26:48.625 11:54:21 -- host/digest.sh@132 -- # nvmftestinit 00:26:48.625 11:54:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:48.625 11:54:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.625 11:54:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:48.625 11:54:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:48.625 11:54:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:48.625 11:54:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.625 11:54:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.625 11:54:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.625 11:54:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:48.625 11:54:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:48.625 11:54:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:48.625 11:54:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:48.625 11:54:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:48.625 11:54:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:48.625 11:54:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.625 11:54:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.625 11:54:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:48.625 11:54:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:48.625 11:54:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:48.625 11:54:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:48.625 11:54:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:48.625 11:54:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.625 11:54:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:48.625 11:54:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:48.625 11:54:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:48.625 11:54:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:48.625 11:54:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:48.625 11:54:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:48.625 Cannot find device "nvmf_tgt_br" 00:26:48.625 11:54:21 -- nvmf/common.sh@154 -- # true 00:26:48.625 11:54:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:48.625 Cannot find device "nvmf_tgt_br2" 00:26:48.625 11:54:21 -- nvmf/common.sh@155 -- # true 00:26:48.625 11:54:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:48.625 11:54:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:48.625 Cannot find device "nvmf_tgt_br" 00:26:48.625 11:54:21 -- nvmf/common.sh@157 -- # true 00:26:48.625 11:54:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:48.625 Cannot find device "nvmf_tgt_br2" 00:26:48.625 11:54:21 -- nvmf/common.sh@158 -- # true 00:26:48.625 11:54:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:48.625 11:54:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:48.885 
11:54:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:48.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.885 11:54:21 -- nvmf/common.sh@161 -- # true 00:26:48.885 11:54:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:48.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.885 11:54:21 -- nvmf/common.sh@162 -- # true 00:26:48.885 11:54:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:48.885 11:54:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:48.885 11:54:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:48.885 11:54:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:48.885 11:54:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:48.885 11:54:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:48.885 11:54:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:48.885 11:54:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:48.886 11:54:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:48.886 11:54:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:48.886 11:54:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:48.886 11:54:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:48.886 11:54:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:48.886 11:54:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:48.886 11:54:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:48.886 11:54:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:48.886 11:54:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:48.886 11:54:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:48.886 11:54:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:48.886 11:54:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:48.886 11:54:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:48.886 11:54:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:48.886 11:54:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:48.886 11:54:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:48.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:26:48.886 00:26:48.886 --- 10.0.0.2 ping statistics --- 00:26:48.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.886 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:48.886 11:54:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:48.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:48.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:26:48.886 00:26:48.886 --- 10.0.0.3 ping statistics --- 00:26:48.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.886 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:48.886 11:54:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:48.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:26:48.886 00:26:48.886 --- 10.0.0.1 ping statistics --- 00:26:48.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.886 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:48.886 11:54:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.886 11:54:21 -- nvmf/common.sh@421 -- # return 0 00:26:48.886 11:54:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:48.886 11:54:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.886 11:54:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:48.886 11:54:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:48.886 11:54:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.886 11:54:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:48.886 11:54:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:48.886 11:54:21 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:48.886 11:54:21 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:26:48.886 11:54:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:48.886 11:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:48.886 11:54:21 -- common/autotest_common.sh@10 -- # set +x 00:26:48.886 ************************************ 00:26:48.886 START TEST nvmf_digest_clean 00:26:48.886 ************************************ 00:26:48.886 11:54:21 -- common/autotest_common.sh@1114 -- # run_digest 00:26:48.886 11:54:21 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:26:48.886 11:54:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:48.886 11:54:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:48.886 11:54:21 -- common/autotest_common.sh@10 -- # set +x 00:26:48.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.886 11:54:21 -- nvmf/common.sh@469 -- # nvmfpid=86779 00:26:48.886 11:54:21 -- nvmf/common.sh@470 -- # waitforlisten 86779 00:26:48.886 11:54:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:48.886 11:54:21 -- common/autotest_common.sh@829 -- # '[' -z 86779 ']' 00:26:48.886 11:54:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.886 11:54:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.886 11:54:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.886 11:54:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.886 11:54:21 -- common/autotest_common.sh@10 -- # set +x 00:26:49.146 [2024-11-20 11:54:21.933874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:49.146 [2024-11-20 11:54:21.933925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.146 [2024-11-20 11:54:22.062194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.146 [2024-11-20 11:54:22.146098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:49.146 [2024-11-20 11:54:22.146219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.146 [2024-11-20 11:54:22.146226] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.146 [2024-11-20 11:54:22.146230] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.146 [2024-11-20 11:54:22.146251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.086 11:54:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.086 11:54:22 -- common/autotest_common.sh@862 -- # return 0 00:26:50.086 11:54:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:50.086 11:54:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:50.086 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:26:50.086 11:54:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.086 11:54:22 -- host/digest.sh@120 -- # common_target_config 00:26:50.086 11:54:22 -- host/digest.sh@43 -- # rpc_cmd 00:26:50.086 11:54:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.086 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:26:50.086 null0 00:26:50.086 [2024-11-20 11:54:22.909603] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.086 [2024-11-20 11:54:22.933651] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.086 11:54:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.086 11:54:22 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:26:50.086 11:54:22 -- host/digest.sh@77 -- # local rw bs qd 00:26:50.086 11:54:22 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:50.086 11:54:22 -- host/digest.sh@80 -- # rw=randread 00:26:50.086 11:54:22 -- host/digest.sh@80 -- # bs=4096 00:26:50.086 11:54:22 -- host/digest.sh@80 -- # qd=128 00:26:50.086 11:54:22 -- host/digest.sh@82 -- # bperfpid=86829 00:26:50.086 11:54:22 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:50.086 11:54:22 -- host/digest.sh@83 -- # waitforlisten 86829 /var/tmp/bperf.sock 00:26:50.086 11:54:22 -- common/autotest_common.sh@829 -- # '[' -z 86829 ']' 00:26:50.086 11:54:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.086 11:54:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.086 11:54:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:50.086 11:54:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.086 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:26:50.086 [2024-11-20 11:54:22.993922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:50.086 [2024-11-20 11:54:22.994027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86829 ] 00:26:50.346 [2024-11-20 11:54:23.132433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.346 [2024-11-20 11:54:23.210999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.915 11:54:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.915 11:54:23 -- common/autotest_common.sh@862 -- # return 0 00:26:50.915 11:54:23 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:50.915 11:54:23 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:50.915 11:54:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:51.175 11:54:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.175 11:54:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.444 nvme0n1 00:26:51.444 11:54:24 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:51.444 11:54:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.444 Running I/O for 2 seconds... 
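Each bdevperf pass is followed by a digest sanity check: the test reads the accel framework statistics back over the bperf RPC socket and asserts that the crc32c opcode was executed, and by the expected module (software here, since no offload engine is configured in this VM). The check issued later in this log boils down to:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # Expected: a line such as "software <non-zero executed count>"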
00:26:54.000 00:26:54.000 Latency(us) 00:26:54.000 [2024-11-20T11:54:27.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.000 [2024-11-20T11:54:27.043Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:54.000 nvme0n1 : 2.00 27415.43 107.09 0.00 0.00 4665.36 1960.36 9901.95 00:26:54.000 [2024-11-20T11:54:27.043Z] =================================================================================================================== 00:26:54.000 [2024-11-20T11:54:27.043Z] Total : 27415.43 107.09 0.00 0.00 4665.36 1960.36 9901.95 00:26:54.000 0 00:26:54.000 11:54:26 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:54.000 11:54:26 -- host/digest.sh@92 -- # get_accel_stats 00:26:54.000 11:54:26 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:54.000 11:54:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:54.000 11:54:26 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:54.000 | select(.opcode=="crc32c") 00:26:54.000 | "\(.module_name) \(.executed)"' 00:26:54.000 11:54:26 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:54.000 11:54:26 -- host/digest.sh@93 -- # exp_module=software 00:26:54.000 11:54:26 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:54.000 11:54:26 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:54.000 11:54:26 -- host/digest.sh@97 -- # killprocess 86829 00:26:54.000 11:54:26 -- common/autotest_common.sh@936 -- # '[' -z 86829 ']' 00:26:54.000 11:54:26 -- common/autotest_common.sh@940 -- # kill -0 86829 00:26:54.000 11:54:26 -- common/autotest_common.sh@941 -- # uname 00:26:54.000 11:54:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.000 11:54:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86829 00:26:54.000 killing process with pid 86829 00:26:54.000 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.000 00:26:54.000 Latency(us) 00:26:54.000 [2024-11-20T11:54:27.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.000 [2024-11-20T11:54:27.043Z] =================================================================================================================== 00:26:54.000 [2024-11-20T11:54:27.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.000 11:54:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:54.000 11:54:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:54.000 11:54:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86829' 00:26:54.000 11:54:26 -- common/autotest_common.sh@955 -- # kill 86829 00:26:54.000 11:54:26 -- common/autotest_common.sh@960 -- # wait 86829 00:26:54.000 11:54:26 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:26:54.000 11:54:26 -- host/digest.sh@77 -- # local rw bs qd 00:26:54.000 11:54:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:54.000 11:54:26 -- host/digest.sh@80 -- # rw=randread 00:26:54.000 11:54:26 -- host/digest.sh@80 -- # bs=131072 00:26:54.000 11:54:26 -- host/digest.sh@80 -- # qd=16 00:26:54.000 11:54:26 -- host/digest.sh@82 -- # bperfpid=86914 00:26:54.000 11:54:26 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:54.000 11:54:26 -- host/digest.sh@83 -- # waitforlisten 86914 /var/tmp/bperf.sock 00:26:54.000 11:54:26 -- 
common/autotest_common.sh@829 -- # '[' -z 86914 ']' 00:26:54.000 11:54:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.000 11:54:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.000 11:54:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.000 11:54:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.000 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:26:54.000 [2024-11-20 11:54:26.949596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:54.000 [2024-11-20 11:54:26.949781] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86914 ] 00:26:54.000 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:54.000 Zero copy mechanism will not be used. 00:26:54.260 [2024-11-20 11:54:27.068915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.260 [2024-11-20 11:54:27.155528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.829 11:54:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.829 11:54:27 -- common/autotest_common.sh@862 -- # return 0 00:26:54.829 11:54:27 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:54.829 11:54:27 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:54.829 11:54:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:55.088 11:54:28 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.088 11:54:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.348 nvme0n1 00:26:55.348 11:54:28 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:55.348 11:54:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:55.348 Zero copy mechanism will not be used. 00:26:55.348 Running I/O for 2 seconds...
00:26:57.906 00:26:57.906 Latency(us) 00:26:57.906 [2024-11-20T11:54:30.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.906 [2024-11-20T11:54:30.949Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:57.906 nvme0n1 : 2.00 10817.12 1352.14 0.00 0.00 1476.93 744.08 9730.24 00:26:57.906 [2024-11-20T11:54:30.949Z] =================================================================================================================== 00:26:57.906 [2024-11-20T11:54:30.949Z] Total : 10817.12 1352.14 0.00 0.00 1476.93 744.08 9730.24 00:26:57.906 0 00:26:57.906 11:54:30 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:57.906 11:54:30 -- host/digest.sh@92 -- # get_accel_stats 00:26:57.906 11:54:30 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:57.906 11:54:30 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:57.906 | select(.opcode=="crc32c") 00:26:57.906 | "\(.module_name) \(.executed)"' 00:26:57.906 11:54:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:57.906 11:54:30 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:57.906 11:54:30 -- host/digest.sh@93 -- # exp_module=software 00:26:57.906 11:54:30 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:57.906 11:54:30 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:57.906 11:54:30 -- host/digest.sh@97 -- # killprocess 86914 00:26:57.906 11:54:30 -- common/autotest_common.sh@936 -- # '[' -z 86914 ']' 00:26:57.906 11:54:30 -- common/autotest_common.sh@940 -- # kill -0 86914 00:26:57.906 11:54:30 -- common/autotest_common.sh@941 -- # uname 00:26:57.906 11:54:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:57.906 11:54:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86914 00:26:57.906 killing process with pid 86914 00:26:57.906 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.906 00:26:57.906 Latency(us) 00:26:57.906 [2024-11-20T11:54:30.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.906 [2024-11-20T11:54:30.949Z] =================================================================================================================== 00:26:57.906 [2024-11-20T11:54:30.949Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.906 11:54:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:57.906 11:54:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:57.906 11:54:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86914' 00:26:57.906 11:54:30 -- common/autotest_common.sh@955 -- # kill 86914 00:26:57.906 11:54:30 -- common/autotest_common.sh@960 -- # wait 86914 00:26:57.906 11:54:30 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:26:57.906 11:54:30 -- host/digest.sh@77 -- # local rw bs qd 00:26:57.906 11:54:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:57.906 11:54:30 -- host/digest.sh@80 -- # rw=randwrite 00:26:57.906 11:54:30 -- host/digest.sh@80 -- # bs=4096 00:26:57.906 11:54:30 -- host/digest.sh@80 -- # qd=128 00:26:57.906 11:54:30 -- host/digest.sh@82 -- # bperfpid=87004 00:26:57.906 11:54:30 -- host/digest.sh@83 -- # waitforlisten 87004 /var/tmp/bperf.sock 00:26:57.906 11:54:30 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:57.906 11:54:30 -- 
common/autotest_common.sh@829 -- # '[' -z 87004 ']' 00:26:57.906 11:54:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:57.906 11:54:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.906 11:54:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:57.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:57.906 11:54:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.906 11:54:30 -- common/autotest_common.sh@10 -- # set +x 00:26:57.906 [2024-11-20 11:54:30.893061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:57.906 [2024-11-20 11:54:30.893582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87004 ] 00:26:58.166 [2024-11-20 11:54:31.015296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.166 [2024-11-20 11:54:31.096206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.735 11:54:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.735 11:54:31 -- common/autotest_common.sh@862 -- # return 0 00:26:58.735 11:54:31 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:58.735 11:54:31 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:58.735 11:54:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:58.995 11:54:31 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.995 11:54:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.255 nvme0n1 00:26:59.255 11:54:32 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:59.255 11:54:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.515 Running I/O for 2 seconds... 
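After each bdevperf pass the test reads the crc32c accounting back from the same bperf socket, as the accel_get_stats / jq trace lines before and after this run show. A sketch of that check, using the exact jq filter from the traces:

    # split accel_get_stats output into module name and executed-op count
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # with no offload configured, the digests must have been computed in software
    if (( acc_executed > 0 )) && [[ $acc_module == software ]]; then
        echo "crc32c handled by the software accel module ($acc_executed operations)"
    fi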
00:27:01.465 00:27:01.465 Latency(us) 00:27:01.465 [2024-11-20T11:54:34.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.465 [2024-11-20T11:54:34.508Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:01.465 nvme0n1 : 2.00 32549.81 127.15 0.00 0.00 3928.59 2303.78 10531.55 00:27:01.465 [2024-11-20T11:54:34.508Z] =================================================================================================================== 00:27:01.465 [2024-11-20T11:54:34.508Z] Total : 32549.81 127.15 0.00 0.00 3928.59 2303.78 10531.55 00:27:01.465 0 00:27:01.465 11:54:34 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:27:01.465 11:54:34 -- host/digest.sh@92 -- # get_accel_stats 00:27:01.465 11:54:34 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:01.465 11:54:34 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:01.465 | select(.opcode=="crc32c") 00:27:01.465 | "\(.module_name) \(.executed)"' 00:27:01.465 11:54:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.725 11:54:34 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:27:01.725 11:54:34 -- host/digest.sh@93 -- # exp_module=software 00:27:01.725 11:54:34 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:27:01.725 11:54:34 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:01.725 11:54:34 -- host/digest.sh@97 -- # killprocess 87004 00:27:01.725 11:54:34 -- common/autotest_common.sh@936 -- # '[' -z 87004 ']' 00:27:01.725 11:54:34 -- common/autotest_common.sh@940 -- # kill -0 87004 00:27:01.725 11:54:34 -- common/autotest_common.sh@941 -- # uname 00:27:01.725 11:54:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:01.725 11:54:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87004 00:27:01.725 11:54:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:01.725 killing process with pid 87004 00:27:01.725 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.725 00:27:01.725 Latency(us) 00:27:01.725 [2024-11-20T11:54:34.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.725 [2024-11-20T11:54:34.768Z] =================================================================================================================== 00:27:01.725 [2024-11-20T11:54:34.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.725 11:54:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:01.725 11:54:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87004' 00:27:01.725 11:54:34 -- common/autotest_common.sh@955 -- # kill 87004 00:27:01.725 11:54:34 -- common/autotest_common.sh@960 -- # wait 87004 00:27:01.985 11:54:34 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:27:01.985 11:54:34 -- host/digest.sh@77 -- # local rw bs qd 00:27:01.985 11:54:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:01.985 11:54:34 -- host/digest.sh@80 -- # rw=randwrite 00:27:01.985 11:54:34 -- host/digest.sh@80 -- # bs=131072 00:27:01.985 11:54:34 -- host/digest.sh@80 -- # qd=16 00:27:01.985 11:54:34 -- host/digest.sh@82 -- # bperfpid=87089 00:27:01.985 11:54:34 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:01.985 11:54:34 -- host/digest.sh@83 -- # waitforlisten 87089 /var/tmp/bperf.sock 00:27:01.985 11:54:34 -- 
common/autotest_common.sh@829 -- # '[' -z 87089 ']' 00:27:01.985 11:54:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:01.985 11:54:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:01.985 11:54:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:01.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:01.985 11:54:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.985 11:54:34 -- common/autotest_common.sh@10 -- # set +x 00:27:01.985 [2024-11-20 11:54:34.893244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:01.985 [2024-11-20 11:54:34.893366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:27:01.985 Zero copy mechanism will not be used. 00:27:01.985 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87089 ] 00:27:02.245 [2024-11-20 11:54:35.031926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.245 [2024-11-20 11:54:35.109676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.814 11:54:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.814 11:54:35 -- common/autotest_common.sh@862 -- # return 0 00:27:02.814 11:54:35 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:27:02.814 11:54:35 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:27:02.815 11:54:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:03.074 11:54:35 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.074 11:54:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.334 nvme0n1 00:27:03.334 11:54:36 -- host/digest.sh@91 -- # bperf_py perform_tests 00:27:03.334 11:54:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:03.334 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:03.334 Zero copy mechanism will not be used. 00:27:03.334 Running I/O for 2 seconds... 
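Each clean-digest pass launches its own short-lived bdevperf instance and only varies the workload knobs (-w, -o, -q). The flags below are copied from the trace for this run; the PID capture and socket wait mirror the bperfpid/waitforlisten pattern from autotest_common.sh seen in the traces (a sketch, not the test script itself):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -q 16 -t 2 \
        -z --wait-for-rpc &
    bperfpid=$!
    # block until the bperf UNIX socket accepts RPCs before configuring the run
    waitforlisten "$bperfpid" /var/tmp/bperf.sock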
00:27:05.871 00:27:05.871 Latency(us) 00:27:05.871 [2024-11-20T11:54:38.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.871 [2024-11-20T11:54:38.914Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:05.871 nvme0n1 : 2.00 12291.04 1536.38 0.00 0.00 1299.16 987.33 9329.58 00:27:05.871 [2024-11-20T11:54:38.914Z] =================================================================================================================== 00:27:05.871 [2024-11-20T11:54:38.914Z] Total : 12291.04 1536.38 0.00 0.00 1299.16 987.33 9329.58 00:27:05.871 0 00:27:05.871 11:54:38 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:27:05.871 11:54:38 -- host/digest.sh@92 -- # get_accel_stats 00:27:05.871 11:54:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:05.871 11:54:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:05.871 11:54:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:05.871 | select(.opcode=="crc32c") 00:27:05.871 | "\(.module_name) \(.executed)"' 00:27:05.871 11:54:38 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:27:05.871 11:54:38 -- host/digest.sh@93 -- # exp_module=software 00:27:05.871 11:54:38 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:27:05.871 11:54:38 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:05.871 11:54:38 -- host/digest.sh@97 -- # killprocess 87089 00:27:05.871 11:54:38 -- common/autotest_common.sh@936 -- # '[' -z 87089 ']' 00:27:05.871 11:54:38 -- common/autotest_common.sh@940 -- # kill -0 87089 00:27:05.871 11:54:38 -- common/autotest_common.sh@941 -- # uname 00:27:05.871 11:54:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:05.871 11:54:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87089 00:27:05.871 11:54:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:05.871 killing process with pid 87089 00:27:05.871 Received shutdown signal, test time was about 2.000000 seconds 00:27:05.871 00:27:05.871 Latency(us) 00:27:05.871 [2024-11-20T11:54:38.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.872 [2024-11-20T11:54:38.915Z] =================================================================================================================== 00:27:05.872 [2024-11-20T11:54:38.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.872 11:54:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:05.872 11:54:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87089' 00:27:05.872 11:54:38 -- common/autotest_common.sh@955 -- # kill 87089 00:27:05.872 11:54:38 -- common/autotest_common.sh@960 -- # wait 87089 00:27:05.872 11:54:38 -- host/digest.sh@126 -- # killprocess 86779 00:27:05.872 11:54:38 -- common/autotest_common.sh@936 -- # '[' -z 86779 ']' 00:27:05.872 11:54:38 -- common/autotest_common.sh@940 -- # kill -0 86779 00:27:05.872 11:54:38 -- common/autotest_common.sh@941 -- # uname 00:27:05.872 11:54:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:05.872 11:54:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86779 00:27:05.872 killing process with pid 86779 00:27:05.872 11:54:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:05.872 11:54:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:05.872 11:54:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86779' 
00:27:05.872 11:54:38 -- common/autotest_common.sh@955 -- # kill 86779 00:27:05.872 11:54:38 -- common/autotest_common.sh@960 -- # wait 86779 00:27:06.132 00:27:06.132 real 0m17.197s 00:27:06.132 user 0m31.934s 00:27:06.133 sys 0m4.452s 00:27:06.133 11:54:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.133 ************************************ 00:27:06.133 END TEST nvmf_digest_clean 00:27:06.133 ************************************ 00:27:06.133 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.133 11:54:39 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:27:06.133 11:54:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.133 11:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.133 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.133 ************************************ 00:27:06.133 START TEST nvmf_digest_error 00:27:06.133 ************************************ 00:27:06.133 11:54:39 -- common/autotest_common.sh@1114 -- # run_digest_error 00:27:06.133 11:54:39 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:27:06.133 11:54:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:06.133 11:54:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.133 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.133 11:54:39 -- nvmf/common.sh@469 -- # nvmfpid=87202 00:27:06.133 11:54:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:06.133 11:54:39 -- nvmf/common.sh@470 -- # waitforlisten 87202 00:27:06.133 11:54:39 -- common/autotest_common.sh@829 -- # '[' -z 87202 ']' 00:27:06.133 11:54:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.133 11:54:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.133 11:54:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.133 11:54:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.133 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.391 [2024-11-20 11:54:39.212690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:06.391 [2024-11-20 11:54:39.212746] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.391 [2024-11-20 11:54:39.333631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.391 [2024-11-20 11:54:39.416319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:06.391 [2024-11-20 11:54:39.416435] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.391 [2024-11-20 11:54:39.416441] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.391 [2024-11-20 11:54:39.416445] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
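For the nvmf_digest_error part, the target is started paused (--wait-for-rpc) inside the test network namespace so that crc32c can be re-routed to the error accel module before anything initializes, as the following trace lines show. A sketch of that prologue assembled from the surrounding traces (not a verbatim copy of digest.sh):

    # start nvmf_tgt paused in the test netns, then wait for its default RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
    # route crc32c through the "error" module, which can corrupt digests on demand
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error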
00:27:06.391 [2024-11-20 11:54:39.416467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.329 11:54:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.330 11:54:40 -- common/autotest_common.sh@862 -- # return 0 00:27:07.330 11:54:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:07.330 11:54:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:07.330 11:54:40 -- common/autotest_common.sh@10 -- # set +x 00:27:07.330 11:54:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.330 11:54:40 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:07.330 11:54:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.330 11:54:40 -- common/autotest_common.sh@10 -- # set +x 00:27:07.330 [2024-11-20 11:54:40.131485] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:07.330 11:54:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.330 11:54:40 -- host/digest.sh@104 -- # common_target_config 00:27:07.330 11:54:40 -- host/digest.sh@43 -- # rpc_cmd 00:27:07.330 11:54:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.330 11:54:40 -- common/autotest_common.sh@10 -- # set +x 00:27:07.330 null0 00:27:07.330 [2024-11-20 11:54:40.223622] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.330 [2024-11-20 11:54:40.247676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.330 11:54:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.330 11:54:40 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:27:07.330 11:54:40 -- host/digest.sh@54 -- # local rw bs qd 00:27:07.330 11:54:40 -- host/digest.sh@56 -- # rw=randread 00:27:07.330 11:54:40 -- host/digest.sh@56 -- # bs=4096 00:27:07.330 11:54:40 -- host/digest.sh@56 -- # qd=128 00:27:07.330 11:54:40 -- host/digest.sh@58 -- # bperfpid=87246 00:27:07.330 11:54:40 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:07.330 11:54:40 -- host/digest.sh@60 -- # waitforlisten 87246 /var/tmp/bperf.sock 00:27:07.330 11:54:40 -- common/autotest_common.sh@829 -- # '[' -z 87246 ']' 00:27:07.330 11:54:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.330 11:54:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.330 11:54:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:07.330 11:54:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.330 11:54:40 -- common/autotest_common.sh@10 -- # set +x 00:27:07.330 [2024-11-20 11:54:40.308773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:07.330 [2024-11-20 11:54:40.308847] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87246 ] 00:27:07.589 [2024-11-20 11:54:40.432851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.589 [2024-11-20 11:54:40.515292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.158 11:54:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.158 11:54:41 -- common/autotest_common.sh@862 -- # return 0 00:27:08.158 11:54:41 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.158 11:54:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.418 11:54:41 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:08.418 11:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.418 11:54:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.418 11:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.418 11:54:41 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.418 11:54:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.678 nvme0n1 00:27:08.678 11:54:41 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:08.678 11:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.678 11:54:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.678 11:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.678 11:54:41 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:08.678 11:54:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:08.940 Running I/O for 2 seconds... 
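The error run combines two knobs visible in the traces above: on the bperf side, bdev_nvme_set_options enables per-error statistics and sets the retry count to -1, and on the target side the error accel module is kept disabled while the controller attaches, then told to corrupt the next 256 crc32c operations. A sketch of that injection sequence (target RPCs go to the default /var/tmp/spdk.sock, bperf RPCs to /var/tmp/bperf.sock):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # initiator side: collect NVMe error stats and retry failed I/O instead of failing the job
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: keep digests correct while the controller attaches...
    $RPC accel_error_inject_error -o crc32c -t disable
    # ...then corrupt the next 256 crc32c operations once I/O is running, which shows up
    # below as "data digest error" and TRANSIENT TRANSPORT ERROR completions
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256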
00:27:08.940 [2024-11-20 11:54:41.765411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.765520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.765531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.773219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.773254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.773262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.783766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.783802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.783810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.793366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.793397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.793404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.800677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.800707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.800714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.808974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.809019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.809026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.816387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.816475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.816485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.826807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.826837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.826844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.834337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.834420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.834429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.844414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.844446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.844453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.853072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.853141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.853149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.862177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.862209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.862217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.870737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.870767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.870774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.878964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.878993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.879000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.887954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.888039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.888048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.899116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.899199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.899208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.907516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.907545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.907552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.916024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.916053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.916061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.923874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.923904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.923911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.933269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.933301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.940 [2024-11-20 11:54:41.933308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.942322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.942407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:08.940 [2024-11-20 11:54:41.942416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.940 [2024-11-20 11:54:41.952270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.940 [2024-11-20 11:54:41.952302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.941 [2024-11-20 11:54:41.952310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.941 [2024-11-20 11:54:41.961221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.941 [2024-11-20 11:54:41.961288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.941 [2024-11-20 11:54:41.961297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.941 [2024-11-20 11:54:41.968933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:08.941 [2024-11-20 11:54:41.968964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.941 [2024-11-20 11:54:41.968971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:41.979903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:41.979975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:41.979983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:41.989024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:41.989056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:41.989063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:41.997898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:41.997927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:41.997935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.006195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.006225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:9978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.006231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.013976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.014006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.014013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.023728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.023756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.023763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.033057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.033088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.033095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.040396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.040426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.040434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.049877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.049906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.049913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.061021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.061052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.061059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.069725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.069753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.069760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.078631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.078672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.078695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.087780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.087824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.087832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.095961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.095989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.095997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.104370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.104400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.104407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.113372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.113404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.113412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.121525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.121596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.121605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.130695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 
00:27:09.202 [2024-11-20 11:54:42.130725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.130732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.138923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.202 [2024-11-20 11:54:42.138952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.202 [2024-11-20 11:54:42.138959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.202 [2024-11-20 11:54:42.148349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.148378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.148386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.156635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.156676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.156684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.168578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.168610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.168617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.177322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.177392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.177400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.187476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.187507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.187514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.195962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.196030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.196039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.203926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.203956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.203963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.212395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.212426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.212434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.220953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.220985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.220992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.229431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.229464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.229471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.203 [2024-11-20 11:54:42.239060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.203 [2024-11-20 11:54:42.239135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.203 [2024-11-20 11:54:42.239144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.247720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.247748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.247755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.255893] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.255924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.255931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.264256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.264288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.264295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.273461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.273493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.273500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.280851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.280880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.280887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.288815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.288843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.288850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.298723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.298753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.298760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.464 [2024-11-20 11:54:42.306924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.306954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.306961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:09.464 [2024-11-20 11:54:42.317074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.464 [2024-11-20 11:54:42.317106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.464 [2024-11-20 11:54:42.317113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.328251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.328323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.328332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.338641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.338686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.338694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.349594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.349626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.349633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.358686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.358715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.358722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.366939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.366968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.366975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.377871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.377954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.377964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.387363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.387395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.387403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.396608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.396642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.396649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.404136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.404167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.404174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.415364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.415395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.415402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.425816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.425845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.425853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.434202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.434231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.434237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.442087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.442117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.442124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.450449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.450479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.450486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.459389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.459422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.459429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.470932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.470959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.470966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.479440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.479478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.487509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.487541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.487548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.495693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.495722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.465 [2024-11-20 11:54:42.495729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.465 [2024-11-20 11:54:42.503056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.465 [2024-11-20 11:54:42.503085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.465 [2024-11-20 11:54:42.503091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.511136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.511164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.511172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.519041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.519071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.519078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.528970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.529042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.529051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.536904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.536933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.536940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.545398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.545430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.545437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.553687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.553717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.553724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.562920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.562948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.562954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.570681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.570709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.570715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.579589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.579618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.579625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.587758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.587808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.587816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.595272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.595357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.595366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.604707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.604738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.604745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.614996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.615026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.615032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.625802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.625832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.625839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.634747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.634776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.634784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.643412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.726 [2024-11-20 11:54:42.643443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.726 [2024-11-20 11:54:42.643451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.726 [2024-11-20 11:54:42.655134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.655164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.655172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.666573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.666604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.666611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.676991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.677062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.677070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.686679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.686708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.686715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.698390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 
00:27:09.727 [2024-11-20 11:54:42.698424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.698432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.707293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.707324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.707332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.715388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.715460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.715469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.726959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.726990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.726997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.737601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.737634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.737642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.749108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.749139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.749147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.727 [2024-11-20 11:54:42.758862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.727 [2024-11-20 11:54:42.758891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.727 [2024-11-20 11:54:42.758899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.766947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.766977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.766984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.778724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.778752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.778759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.789069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.789100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.789107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.799585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.799617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.799624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.807647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.807701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.807709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.816115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.816147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.816154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.824975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.825004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.825011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.835360] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.835391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.835398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.845419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.845449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.845457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.853051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.853082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.853089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.863705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.863734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.863742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.873168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.873199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.873206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.988 [2024-11-20 11:54:42.883593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.988 [2024-11-20 11:54:42.883624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.988 [2024-11-20 11:54:42.883631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.892087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.892117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.892124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:09.989 [2024-11-20 11:54:42.900161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.900192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.900199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.910146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.910232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.910241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.917903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.917933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.917939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.926578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.926610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.926617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.937486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.937573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.937582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.948122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.948189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.948198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.956316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.956347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.956354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.965249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.965278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.965285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.974190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.974220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.974227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.981977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.982007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.982013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:42.992372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:42.992401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:42.992409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:43.003075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:43.003161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:43.003170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:43.011980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:43.012011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:43.012019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.989 [2024-11-20 11:54:43.021206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:09.989 [2024-11-20 11:54:43.021237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.989 [2024-11-20 11:54:43.021244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.030764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.030792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.030799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.040672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.040701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.040708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.049396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.049427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.049434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.057344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.057375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.057382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.068709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.068737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.068745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.078988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.079018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.079025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.089523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.089554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:10.266 [2024-11-20 11:54:43.089561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.099229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.099267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.099274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.106809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.106837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.106844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.117235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.117323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.117333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.128483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.128552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.128561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.137347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.137379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.137386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.147253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.147284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.147291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.157692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.157722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:11859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.157729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.165530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.165562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.165569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.173500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.173531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.173538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.182680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.266 [2024-11-20 11:54:43.182708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.266 [2024-11-20 11:54:43.182715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.266 [2024-11-20 11:54:43.193913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.193998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.194006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.204929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.205013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.205022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.213750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.213780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.213786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.222291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.222322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.222329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.231504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.231535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.231542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.240969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.241000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.241007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.250381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.250414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.250421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.258846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.258932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.258941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.266591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.266685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.266694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.276163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.276194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.276201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.283761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 
00:27:10.267 [2024-11-20 11:54:43.283797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.283804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.267 [2024-11-20 11:54:43.292124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.267 [2024-11-20 11:54:43.292155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.267 [2024-11-20 11:54:43.292162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.537 [2024-11-20 11:54:43.300025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.537 [2024-11-20 11:54:43.300056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.537 [2024-11-20 11:54:43.300062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.537 [2024-11-20 11:54:43.308932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.537 [2024-11-20 11:54:43.308962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.537 [2024-11-20 11:54:43.308969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.319458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.319489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.319496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.329998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.330083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.330092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.337630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.337727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.337736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.346744] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.346772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.346779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.355916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.355946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.355953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.365177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.365260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.365269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.373630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.373671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.373695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.382718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.382747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.382754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.392423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.392498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.392508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.399857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.399889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.399896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:10.538 [2024-11-20 11:54:43.410970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.411049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.411058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.420566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.420598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.420606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.429399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.429432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.429439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.438303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.438391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.438400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.447669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.447699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.447706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.456109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.456140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.456147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.464854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.464925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.464934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.472889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.472919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.472926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.482744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.482773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.482780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.492487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.492575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.492584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.538 [2024-11-20 11:54:43.503296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.538 [2024-11-20 11:54:43.503328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.538 [2024-11-20 11:54:43.503335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.511214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.511280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.511288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.519724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.519753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.519760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.528273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.528307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.528314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.536875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.536905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.536912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.546494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.546566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.546575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.557456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.557524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.557533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.566741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.566770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.566777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.539 [2024-11-20 11:54:43.577580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.539 [2024-11-20 11:54:43.577677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.539 [2024-11-20 11:54:43.577686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.588189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.588221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.588228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.598759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.598789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:10.799 [2024-11-20 11:54:43.598796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.606608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.606639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.606646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.614470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.614502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.614509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.622512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.622543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.622550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.630843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.630872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.630879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.638854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.638885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.638893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.649162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.649248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.649257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.660608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.660694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:4363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.660703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.669523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.669556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.669564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.677479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.677512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.677520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.687112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.687145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.687153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.697548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.799 [2024-11-20 11:54:43.697579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.799 [2024-11-20 11:54:43.697587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.799 [2024-11-20 11:54:43.706096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.800 [2024-11-20 11:54:43.706125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.800 [2024-11-20 11:54:43.706132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.800 [2024-11-20 11:54:43.714145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.800 [2024-11-20 11:54:43.714174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.800 [2024-11-20 11:54:43.714182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.800 [2024-11-20 11:54:43.722214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.800 [2024-11-20 11:54:43.722244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.800 [2024-11-20 11:54:43.722251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.800 [2024-11-20 11:54:43.730354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.800 [2024-11-20 11:54:43.730385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.800 [2024-11-20 11:54:43.730392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.800 [2024-11-20 11:54:43.738324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.800 [2024-11-20 11:54:43.738358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.800 [2024-11-20 11:54:43.738365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.800 [2024-11-20 11:54:43.745671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b3f50) 00:27:10.800 [2024-11-20 11:54:43.745700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.800 [2024-11-20 11:54:43.745707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.800 00:27:10.800 Latency(us) 00:27:10.800 [2024-11-20T11:54:43.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.800 [2024-11-20T11:54:43.843Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:10.800 nvme0n1 : 2.00 27657.42 108.04 0.00 0.00 4623.63 2089.14 14767.06 00:27:10.800 [2024-11-20T11:54:43.843Z] =================================================================================================================== 00:27:10.800 [2024-11-20T11:54:43.843Z] Total : 27657.42 108.04 0.00 0.00 4623.63 2089.14 14767.06 00:27:10.800 0 00:27:10.800 11:54:43 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:10.800 11:54:43 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:10.800 11:54:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:10.800 11:54:43 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:10.800 | .driver_specific 00:27:10.800 | .nvme_error 00:27:10.800 | .status_code 00:27:10.800 | .command_transient_transport_error' 00:27:11.060 11:54:43 -- host/digest.sh@71 -- # (( 217 > 0 )) 00:27:11.060 11:54:43 -- host/digest.sh@73 -- # killprocess 87246 00:27:11.060 11:54:43 -- common/autotest_common.sh@936 -- # '[' -z 87246 ']' 00:27:11.060 11:54:43 -- common/autotest_common.sh@940 -- # kill -0 87246 00:27:11.060 11:54:43 -- common/autotest_common.sh@941 -- # uname 00:27:11.060 11:54:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:11.060 11:54:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87246 00:27:11.060 killing process with pid 87246 00:27:11.060 
Received shutdown signal, test time was about 2.000000 seconds 00:27:11.060 00:27:11.060 Latency(us) 00:27:11.060 [2024-11-20T11:54:44.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.060 [2024-11-20T11:54:44.103Z] =================================================================================================================== 00:27:11.060 [2024-11-20T11:54:44.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:11.060 11:54:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:11.060 11:54:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:11.060 11:54:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87246' 00:27:11.060 11:54:44 -- common/autotest_common.sh@955 -- # kill 87246 00:27:11.060 11:54:44 -- common/autotest_common.sh@960 -- # wait 87246 00:27:11.321 11:54:44 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:27:11.321 11:54:44 -- host/digest.sh@54 -- # local rw bs qd 00:27:11.321 11:54:44 -- host/digest.sh@56 -- # rw=randread 00:27:11.321 11:54:44 -- host/digest.sh@56 -- # bs=131072 00:27:11.321 11:54:44 -- host/digest.sh@56 -- # qd=16 00:27:11.321 11:54:44 -- host/digest.sh@58 -- # bperfpid=87331 00:27:11.321 11:54:44 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:11.321 11:54:44 -- host/digest.sh@60 -- # waitforlisten 87331 /var/tmp/bperf.sock 00:27:11.321 11:54:44 -- common/autotest_common.sh@829 -- # '[' -z 87331 ']' 00:27:11.321 11:54:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.321 11:54:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.321 11:54:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.321 11:54:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.321 11:54:44 -- common/autotest_common.sh@10 -- # set +x 00:27:11.321 [2024-11-20 11:54:44.274725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:11.321 [2024-11-20 11:54:44.274841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:27:11.321 Zero copy mechanism will not be used. 
00:27:11.321 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87331 ] 00:27:11.581 [2024-11-20 11:54:44.412594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.581 [2024-11-20 11:54:44.491638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.151 11:54:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:12.151 11:54:45 -- common/autotest_common.sh@862 -- # return 0 00:27:12.151 11:54:45 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.151 11:54:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.411 11:54:45 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:12.411 11:54:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.411 11:54:45 -- common/autotest_common.sh@10 -- # set +x 00:27:12.411 11:54:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.411 11:54:45 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.411 11:54:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.672 nvme0n1 00:27:12.672 11:54:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:12.672 11:54:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.672 11:54:45 -- common/autotest_common.sh@10 -- # set +x 00:27:12.672 11:54:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.672 11:54:45 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:12.672 11:54:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:12.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:12.672 Zero copy mechanism will not be used. 00:27:12.672 Running I/O for 2 seconds... 
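The -x trace above shows how host/digest.sh drives this second error-injection pass before the digest errors below start streaming. A minimal sketch of that RPC sequence, assembled only from the commands visible in the trace — the NVMe-oF TCP target behind 10.0.0.2:4420 is configured earlier in the run (not in this excerpt), and the RPC socket used for accel_error_inject_error (issued via rpc_cmd in the trace) is an assumption here:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF=/var/tmp/bperf.sock

    # Start bdevperf as the TCP initiator: 131072-byte random reads, queue depth 16,
    # 2-second run, waiting for RPC configuration (-z) on $BPERF.
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF -w randread -o 131072 -t 2 -q 16 -z &

    # Enable per-status-code NVMe error counters and bdev-level retries (-1), as set in the trace.
    $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the controller with data digest enabled, then arm crc32c corruption
    # with the arguments shown in the trace (socket for the injection call assumed).
    $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload, then read back the counter that get_transient_errcount checks.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests
    $SPDK/scripts/rpc.py -s $BPERF bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'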
00:27:12.672 [2024-11-20 11:54:45.682985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.683060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.683070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.686048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.686083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.686107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.689382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.689417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.689424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.692798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.692832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.692856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.696139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.696175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.696182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.698556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.698586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.698594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.701086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.701121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.701128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.704225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.704261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.704268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.707293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.672 [2024-11-20 11:54:45.707323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.672 [2024-11-20 11:54:45.707330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.672 [2024-11-20 11:54:45.710563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.673 [2024-11-20 11:54:45.710596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.673 [2024-11-20 11:54:45.710604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.933 [2024-11-20 11:54:45.713665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.933 [2024-11-20 11:54:45.713708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.713715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.717219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.717256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.717263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.720787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.720822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.720845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.724021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.724055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.724062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.727377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.727408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.727415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.730787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.730812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.733963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.733989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.733997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.737226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.737256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.737263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.740561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.740598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.740606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.743760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.743797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.743805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.747114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.747147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 
[2024-11-20 11:54:45.747171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.750397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.750432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.750439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.753689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.753723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.753730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.756978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.757013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.757021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.760144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.760180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.760187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.763212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.763241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.763248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.766635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.766676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.766683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.769568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.769602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.769626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.772155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.772190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.772198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.774877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.774919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.774926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.777342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.777378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.777401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.780440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.780475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.780483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.783208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.783239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.783246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-20 11:54:45.786172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.934 [2024-11-20 11:54:45.786206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-20 11:54:45.786213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.789346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.789380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.789403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.791769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.791824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.791831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.795059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.795089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.795097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.798222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.798258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.798265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.801238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.801273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.801296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.804564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.804602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.804610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.807859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.807889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.807897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.811067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.811096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.811103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.814336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.814371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.814379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.817768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.817802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.817809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.820839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.820874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.820882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.824140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.824174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.824181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.827333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.827363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.827370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.830540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.830572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.830579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.833749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 
[2024-11-20 11:54:45.833782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.833789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.837133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.837167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.837174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.840362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.840409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.840416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.843578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.843608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.843616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.846792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.846821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.846828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.850077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.850112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.850119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.853404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.853439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.853446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.856603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.856638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-20 11:54:45.856646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-20 11:54:45.859845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.935 [2024-11-20 11:54:45.859872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.859879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.862978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.863008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.863015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.866303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.866339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.866347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.869529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.869564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.869571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.872705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.872738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.872746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.875928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.875962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.875969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.879171] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.879202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.879209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.882348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.882381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.882388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.885646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.885691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.885699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.888993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.889029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.889048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.892120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.892156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.892163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.895295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.895325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.895332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.898534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.898566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.898574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:12.936 [2024-11-20 11:54:45.901612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.901647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.901664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.904812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.904848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.904855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.907780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.907807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.907815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.910935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.910964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.910970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.914156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.914190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.914213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.917304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.917339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.917362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.920431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.920466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.920474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.923509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.923539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.923545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.926608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.926637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.926660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.929488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.929520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.929542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.936 [2024-11-20 11:54:45.932869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.936 [2024-11-20 11:54:45.932904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.936 [2024-11-20 11:54:45.932911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.935893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.935927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.935934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.939099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.939128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.939135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.942324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.942358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.942381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.945392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.945428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.945451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.948594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.948628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.948636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.951842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.951869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.951876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.954951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.954981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.955004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.957952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.957986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.958008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.961167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.961203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.961227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.964164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.964198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.964221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.967567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.967597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.967604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.937 [2024-11-20 11:54:45.970712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:12.937 [2024-11-20 11:54:45.970739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.937 [2024-11-20 11:54:45.970746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.973730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.973760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.973783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.976663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.976693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.976700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.980115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.980149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.980156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.983299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.983329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.983352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.986336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.986367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.199 [2024-11-20 11:54:45.986389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.989457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.989489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.989513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.992430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.992463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.992470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.995730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.995765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:45.998892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:45.998922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:45.998945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.001957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.001991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.002014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.005015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.005061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.005084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.007993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.008026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.008033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.010995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.011024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.011030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.014010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.014043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.014066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.017204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.017238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.017260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.020372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.020408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.020416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.023430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.023459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.023466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.026701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.026731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.026738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.029772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.199 [2024-11-20 11:54:46.029805] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.199 [2024-11-20 11:54:46.029828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.199 [2024-11-20 11:54:46.032990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.033039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.033046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.036127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.036163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.036170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.039260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.039290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.039296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.042409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.042439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.042446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.045573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.045606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.045628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.048575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.048611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.048618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.051754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.051788] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.051796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.054957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.054986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.054992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.058090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.058124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.058147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.061258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.061294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.061316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.064271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.064307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.064314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.067250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.067278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.067285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.070312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.070344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.070366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.073319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.073353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.073375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.076529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.076564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.076571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.079332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.079360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.079383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.082426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.082457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.082463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.085437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.085471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.085494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.088717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.088750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.088773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.091665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.091708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.091715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.200 [2024-11-20 11:54:46.094550] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.200 [2024-11-20 11:54:46.094578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.200 [2024-11-20 11:54:46.094585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.097581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.097616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.097639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.101097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.101132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.101156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.104208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.104244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.104251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.107383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.107414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.107436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.110267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.110297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.110319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.113415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.113451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.113474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:13.201 [2024-11-20 11:54:46.116621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.116666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.116674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.119703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.119730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.119737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.122707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.122734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.122740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.125754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.125786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.125809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.128934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.128971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.128977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.131659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.131712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.131719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.134714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.134740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.134747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.137744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.137775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.137782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.140869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.140903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.140926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.144075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.144109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.144132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.147297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.147325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.147332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.150423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.150454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.150477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.153557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.153591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.153614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.156873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.156907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.156914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.160150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.160185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.160192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.163108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.163135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.163142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.166208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.166241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.166265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.201 [2024-11-20 11:54:46.169453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.201 [2024-11-20 11:54:46.169488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.201 [2024-11-20 11:54:46.169510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.172631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.172677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.172684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.175642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.175695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.175701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.178724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.178752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.178758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.181693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.181726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.181749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.185231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.185266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.185289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.188244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.188278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.188301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.191495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.191524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.191531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.194585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.194617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.194624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.197761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.197794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.197817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.200825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.200860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.202 [2024-11-20 11:54:46.200868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.203907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.203941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.203948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.207022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.207051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.207074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.210054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.210088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.210111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.213211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.213247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.213253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.216353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.216388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.216411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.219346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.219374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.219381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.222362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.222393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.222400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.225381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.225414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.225437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.228511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.228545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.228553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.231689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.231715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.231722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.202 [2024-11-20 11:54:46.234960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.202 [2024-11-20 11:54:46.234990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.202 [2024-11-20 11:54:46.235014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.237968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.238002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.238024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.241114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.241150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.241174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.244121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.244156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.244178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.247157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.247186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.247193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.250253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.250285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.250291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.253359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.253393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.253416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.256524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.256559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.256566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.259492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.259522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.259528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.262566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.262596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.262620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.464 [2024-11-20 11:54:46.265574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.464 [2024-11-20 11:54:46.265608] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.464 [2024-11-20 11:54:46.265630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.268767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.268801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.268808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.271833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.271860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.271866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.275043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.275073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.275096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.278196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.278230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.278253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.281392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.281427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.281450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.284583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.284619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.284626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.287456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 
00:27:13.465 [2024-11-20 11:54:46.287484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.287491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.290600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.290630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.290636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.293792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.293826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.293849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.296957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.296992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.296999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.300053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.300089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.300111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.303230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.303259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.303267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.306283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.306316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.306323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.309393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.309427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.309450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.312231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.312264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.312271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.315245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.315273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.315280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.318484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.318516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.318522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.321410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.321444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.321450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.324601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.324634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.324640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.327635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.327688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.327695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.330750] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.330778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.465 [2024-11-20 11:54:46.330785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.465 [2024-11-20 11:54:46.333853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.465 [2024-11-20 11:54:46.333886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.333909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.336941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.336974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.336981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.339952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.339985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.339991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.342973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.343002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.343025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.345937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.345971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.345993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.348998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.349059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.349066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:13.466 [2024-11-20 11:54:46.352028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.352062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.352069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.355130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.355160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.355183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.358146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.358179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.358202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.361275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.361311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.361333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.364453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.364487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.364495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.367529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.367558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.367581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.370460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.370489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.370512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.373653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.373710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.373717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.376781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.376814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.376821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.379869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.379901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.379908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.383040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.383070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.383077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.386067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.386101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.386124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.389195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.389230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.389253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.392270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.392305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.392328] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.395557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.395586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.395593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.398587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.398618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.398640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.401644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.466 [2024-11-20 11:54:46.401701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.466 [2024-11-20 11:54:46.401709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.466 [2024-11-20 11:54:46.404799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.404832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.404840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.407958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.407992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.407999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.410835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.410863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.410869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.413985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.414020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.414043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.416870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.416900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.416906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.419986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.420015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.420021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.423076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.423102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.423109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.426023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.426052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.426059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.428991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.429019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.429025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.432049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.432081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.432088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.434900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.434928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.467 [2024-11-20 11:54:46.434936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.438552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.438586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.438594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.441925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.441968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.441976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.445374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.445404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.445412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.448648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.448689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.448696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.451564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.451590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.451596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.454860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.454886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.454893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.458111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.458142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.458148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.461309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.461354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.461361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.464204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.464232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.464238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.467564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.467592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.467599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.470530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.470558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.470564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.467 [2024-11-20 11:54:46.473601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.467 [2024-11-20 11:54:46.473630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.467 [2024-11-20 11:54:46.473637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.476739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.476767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.476773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.479726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.479750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.479757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.482787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.482813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.482819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.485883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.485911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.485918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.488971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.489000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.489007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.491989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.492019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.492026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.495112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.495139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.495145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.498177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.498206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.498213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.468 [2024-11-20 11:54:46.501220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.468 [2024-11-20 11:54:46.501248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.468 [2024-11-20 11:54:46.501255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.504211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.504240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.504246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.507144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.507171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.507177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.510475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.510503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.510509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.513784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.513811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.513818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.516445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.516476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.516483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.519633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.519685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.519692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.522805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 
00:27:13.730 [2024-11-20 11:54:46.522832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.522839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.525885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.525914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.525920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.528860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.528889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.528896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.531951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.531978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.531985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.535080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.535106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.535113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.538216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.538241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.538248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.541084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.541112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.541119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.544091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.544122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.544130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.547102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.547128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.547135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.550133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.550160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.550167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.553335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.553364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.553371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.556306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.556336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.556342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.559551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.559577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.559583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.562503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.562528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.562534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.565535] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.730 [2024-11-20 11:54:46.565564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.730 [2024-11-20 11:54:46.565571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.730 [2024-11-20 11:54:46.568720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.568747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.568754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.571716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.571740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.571746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.574810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.574836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.574842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.577711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.577737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.577744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.580889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.580918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.580925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.584015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.584044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.584050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:13.731 [2024-11-20 11:54:46.587065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.587092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.587098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.590260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.590288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.590294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.593399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.593429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.593435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.596444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.596472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.596479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.599634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.599669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.599692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.602602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.602628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.602635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.605836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.605864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.605870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.608718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.608745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.608752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.612056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.612086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.612093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.615178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.615205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.615212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.618237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.618264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.618270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.621509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.621539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.621545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.624850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.624880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.624887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.628049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.628078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.628084] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.631199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.631226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.631232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.634332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.634359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.634365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 11:54:46.637312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.731 [2024-11-20 11:54:46.637340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 11:54:46.637347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.640688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.640715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.640723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.643830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.643855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.643862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.646830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.646854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.646861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.649641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.649677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.649684] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.652907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.652936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.652943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.656115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.656144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.656151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.659128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.659154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.659161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.662121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.662150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.662157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.665269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.665298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.665305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.668383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.668414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.668420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.671426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.671453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.732 [2024-11-20 11:54:46.671460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.674439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.674465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.674472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.677428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.677457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.677463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.680335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.680365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.680371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.683694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.683719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.683726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.686772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.686800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.686807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.689802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.689831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.689838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.692888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.692917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.692924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.695977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.696005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.696012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.698928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.698953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.698960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.701900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.701928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.701945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.705170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.705198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.705205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.708232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.708262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.708268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 11:54:46.711216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.732 [2024-11-20 11:54:46.711241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 11:54:46.711248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.714299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.714326] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.714332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.717221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.717249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.717255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.720255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.720284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.720291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.723343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.723369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.723375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.726183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.726208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.726214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.729347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.729375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.729382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.732367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.732396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.732402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.735619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.735647] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.735663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.738642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.738677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.738685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.741722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.741749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.741756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.744889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.744917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.744924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.747885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.747913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.747920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.751200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.751229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.751235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.754510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.754539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.754545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.757697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.757725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.757731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.760977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.761019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.761026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.764268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.764298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.764305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.733 [2024-11-20 11:54:46.767547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.733 [2024-11-20 11:54:46.767574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.733 [2024-11-20 11:54:46.767581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.770601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.770628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.770635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.774193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.774223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.774230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.777126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.777155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.777162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.780355] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.780385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.780392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.783497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.783525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.783532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.786282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.786309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.786316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.788912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.788942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.788949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.791956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.791984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.791991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.794641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.794678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.794685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.797589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.797618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.797625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:13.995 [2024-11-20 11:54:46.800702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.800729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.800736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.803925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.803953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.803960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.806987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.807023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.807030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.810127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.810156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.810163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.995 [2024-11-20 11:54:46.813319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.995 [2024-11-20 11:54:46.813349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.995 [2024-11-20 11:54:46.813356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.816558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.816588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.816594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.819513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.819539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.819546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.822792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.822818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.822825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.826132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.826162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.826168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.829297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.829326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.829332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.832367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.832396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.832403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.835381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.835408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.835415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.838303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.838330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.838336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.841528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.841557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.841564] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.844715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.844742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.844749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.847878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.847905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.847912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.850835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.850861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.850868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.853777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.853804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.853811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.857109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.857139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.857146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.859945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.859974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.859981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.862903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.862930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.862937] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.866073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.866102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.866109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.869244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.869274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.869281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.872203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.872232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.872239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.875334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.875360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.875367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.878335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.878362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.878368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.881451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.881479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.881486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.884728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.884754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.996 [2024-11-20 11:54:46.884761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.887728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.887750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.887756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.890726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.890750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.890757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.893867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.893896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.893903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.996 [2024-11-20 11:54:46.897047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.996 [2024-11-20 11:54:46.897076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.996 [2024-11-20 11:54:46.897083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.900239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.900268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.900276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.903496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.903524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.903531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.906882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.906910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.906917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.910080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.910110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.910117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.913482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.913512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.913519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.916715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.916742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.916749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.919919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.919948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.919955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.923210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.923238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.923245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.926681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.926707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.926713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.930201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.930234] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.930242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.933707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.933731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.933739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.937103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.937133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.937141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.940522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.940552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.940560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.943697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.943721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.943727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.946831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.946858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.946866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.949914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.949941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.949948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.953141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.953171] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.953178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.956578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.956607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.956614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.959887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.959933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.959940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.963220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.963247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.963254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.966148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.966174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.966181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.969462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.969492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.969499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.972714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.972742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.972749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.975752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.975784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.975807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.978856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.978883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.978890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.982232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.982262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.982268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.997 [2024-11-20 11:54:46.985428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.997 [2024-11-20 11:54:46.985458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.997 [2024-11-20 11:54:46.985464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:46.988617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:46.988646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:46.988664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:46.991718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:46.991742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:46.991749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:46.994792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:46.994820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:46.994828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:46.997919] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:46.997948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:46.997955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.001123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.001152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.001159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.004229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.004259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.004266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.007520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.007547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.007554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.010647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.010683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.010707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.014022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.014053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.014059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.017117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.017146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.017153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:13.998 [2024-11-20 11:54:47.020191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.020219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.020225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.023330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.023357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.023364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.026594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.026621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.026628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.029853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.029880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.029887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.998 [2024-11-20 11:54:47.032950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:13.998 [2024-11-20 11:54:47.032990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.998 [2024-11-20 11:54:47.032997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.260 [2024-11-20 11:54:47.036018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.260 [2024-11-20 11:54:47.036047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.260 [2024-11-20 11:54:47.036054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.260 [2024-11-20 11:54:47.039376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.260 [2024-11-20 11:54:47.039404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.260 [2024-11-20 11:54:47.039411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.042546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.042574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.042580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.045572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.045602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.045609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.048690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.048717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.048724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.051911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.051940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.051947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.055101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.055128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.055135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.058198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.058225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.058232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.061559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.061587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.061594] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.064865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.064895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.064902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.067318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.067344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.067350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.070460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.070488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.070495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.073475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.073506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.073512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.076729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.076756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.076763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.079957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.079987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.079993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.083193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.083222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.083228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.086329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.086356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.086363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.089683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.089710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.089717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.093022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.093051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.093058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.096014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.096042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.096049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.098920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.098947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.098953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.102119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.102149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.261 [2024-11-20 11:54:47.102155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.105274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.105304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:14.261 [2024-11-20 11:54:47.105311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.261 [2024-11-20 11:54:47.108656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.261 [2024-11-20 11:54:47.108695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.108702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.111888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.111914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.111921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.115077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.115104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.115111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.118177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.118205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.118211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.121213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.121243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.121250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.124517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.124548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.124555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.127438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.127463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.127470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.129952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.129978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.129984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.133205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.133234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.133241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.136271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.136300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.136306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.139280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.139306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.139313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.142461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.142488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.142495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.145473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.145502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.145509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.148390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.148418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.148425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.151633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.151686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.151693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.154847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.154874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.154881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.157966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.157994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.158001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.161158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.161187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.161194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.164313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.164343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.164350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.167335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.167362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.167368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.170282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 
[2024-11-20 11:54:47.170308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.170316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.173492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.173521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.173528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.176503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.262 [2024-11-20 11:54:47.176531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.262 [2024-11-20 11:54:47.176538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.262 [2024-11-20 11:54:47.179644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.179680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.179686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.182907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.182935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.182941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.186020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.186049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.186056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.189079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.189108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.189114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.192309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.192338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.192344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.195354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.195381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.195388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.198389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.198415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.198422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.201642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.201696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.201703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.204679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.204705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.204712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.207849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.207875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.207883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.211018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.211044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.211050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.214030] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.214058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.214064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.217117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.217145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.217152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.220252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.220280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.220287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.223372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.223398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.223405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.226403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.226430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.226436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.229390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.229418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.229425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.232630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.232670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.232678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:27:14.263 [2024-11-20 11:54:47.235708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.235733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.235739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.238945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.238973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.238979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.241836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.241865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.241872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.245205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.245235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.245241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.248257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.248287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.248293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.251280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.263 [2024-11-20 11:54:47.251305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.263 [2024-11-20 11:54:47.251312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.263 [2024-11-20 11:54:47.254163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.254188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.254195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.257297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.257327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.257334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.260459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.260489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.260495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.263569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.263596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.263603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.266827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.266853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.266860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.269975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.270004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.270011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.273140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.273170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.273177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.276295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.276325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.276331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.279346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.279372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.279379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.282436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.282463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.282469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.285416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.285445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.285452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.288627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.288667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.288674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.291685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.291709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.291716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.294826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.294852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 [2024-11-20 11:54:47.294859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.264 [2024-11-20 11:54:47.297883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.264 [2024-11-20 11:54:47.297911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.264 
[2024-11-20 11:54:47.297918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.525 [2024-11-20 11:54:47.300836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.525 [2024-11-20 11:54:47.300866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.525 [2024-11-20 11:54:47.300872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.525 [2024-11-20 11:54:47.304090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.304120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.304126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.307157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.307183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.307189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.310200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.310227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.310234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.313163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.313192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.313199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.316325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.316355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.316362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.319491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.319518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.319525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.322580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.322607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.322614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.325742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.325770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.325777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.328786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.328817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.328824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.331972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.332015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.332022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.335126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.335153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.335160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.338158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.338187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.338193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.341178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.341207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.341214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.344246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.344276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.344283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.347347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.347373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.347380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.350519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.350546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.350553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.353693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.353719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.353726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.356808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.356837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.356844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.359865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.359891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.359898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.362831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.362856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.362862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.366106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.366135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.366142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.369073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.526 [2024-11-20 11:54:47.369102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.526 [2024-11-20 11:54:47.369108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.526 [2024-11-20 11:54:47.372068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.372097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.372104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.375091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.375117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.375124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.377682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.377706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.377712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.380458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.380490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.380497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.382897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 
00:27:14.527 [2024-11-20 11:54:47.382922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.382928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.385763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.385793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.385800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.388126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.388156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.388163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.390767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.390793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.390800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.393416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.393445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.393452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.396442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.396471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.396478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.399760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.399790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.399813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.402753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.402779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.402785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.405893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.405922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.405929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.408859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.408887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.408894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.411825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.411850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.411857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.414748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.414771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.414778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.417760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.417787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.417793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.420821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.420852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.420859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.423944] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.423972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.423979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.426800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.426826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.426832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.429724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.429750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.429757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.432784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.432812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.432819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.435896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.435925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.435932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.438872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.438897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.438904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.527 [2024-11-20 11:54:47.442083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.442111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.527 [2024-11-20 11:54:47.442118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:14.527 [2024-11-20 11:54:47.445244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.527 [2024-11-20 11:54:47.445273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.445280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.448285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.448316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.448322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.451413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.451439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.451446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.454579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.454606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.454613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.457743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.457771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.457778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.460726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.460752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.460758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.463530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.463555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.463562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.466808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.466834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.466841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.469966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.469995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.470002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.473034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.473063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.473070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.476060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.476089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.476095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.479014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.479042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.479048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.481444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.481472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.481478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.484663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.484702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.484708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.487612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.487638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.487644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.490767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.490791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.490798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.493762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.493789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.493796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.496749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.496774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.496781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.499791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.499831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.499838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.502967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.502994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.503001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.506126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.506154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:14.528 [2024-11-20 11:54:47.506161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.509133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.509163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.509169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.512115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.512144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.512151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.515207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.515233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.515239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.518109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.528 [2024-11-20 11:54:47.518134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.528 [2024-11-20 11:54:47.518141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.528 [2024-11-20 11:54:47.521327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.521357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.521363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.524422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.524452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.524459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.527497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.527523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.527531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.530773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.530799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.530806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.533798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.533826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.533833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.537030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.537059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.537065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.540129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.540158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.540165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.543339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.543365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.543372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.546450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.546477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.546484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.549770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.549797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.549804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.552903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.552933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.552939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.555880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.555907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.555913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.559030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.559056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.559063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.529 [2024-11-20 11:54:47.562069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.529 [2024-11-20 11:54:47.562096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-11-20 11:54:47.562103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.565155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.565184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.565191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.568319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.568349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.568355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.571466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 
00:27:14.789 [2024-11-20 11:54:47.571493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.571500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.574534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.574561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.574568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.577439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.577467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.577473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.580520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.580549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.580555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.583484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.583511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.583517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.586743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.586768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.789 [2024-11-20 11:54:47.586775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.789 [2024-11-20 11:54:47.589845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.789 [2024-11-20 11:54:47.589874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.589880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.592912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.592942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.592949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.595936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.595965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.595971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.598991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.599028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.599034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.602269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.602297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.602305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.605266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.605294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.605301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.608375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.608404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.608411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.611649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.611683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.611706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.614701] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.614725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.614732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.617898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.617926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.617933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.620898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.620928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.620935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.623973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.624001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.624008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.627213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.627239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.627246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.630253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.630280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.630287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.633269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.633296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.633303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:14.790 [2024-11-20 11:54:47.636423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.636455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.636462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.639585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.639612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.639618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.642559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.642586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.642592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.645441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.645468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.645474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.648712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.648739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.648746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.651672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.651707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.651713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.654716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.654740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.654746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.657910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.657938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.657945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.660887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.660915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.660922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.664126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.664154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.664161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.667083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.667108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.667115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-11-20 11:54:47.670167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b67e0) 00:27:14.790 [2024-11-20 11:54:47.670193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-11-20 11:54:47.670200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 00:27:14.790 Latency(us) 00:27:14.790 [2024-11-20T11:54:47.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.790 [2024-11-20T11:54:47.834Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:14.791 nvme0n1 : 2.00 9919.63 1239.95 0.00 0.00 1610.51 790.58 7841.43 00:27:14.791 [2024-11-20T11:54:47.834Z] =================================================================================================================== 00:27:14.791 [2024-11-20T11:54:47.834Z] Total : 9919.63 1239.95 0.00 0.00 1610.51 790.58 7841.43 00:27:14.791 0 00:27:14.791 11:54:47 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:14.791 11:54:47 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:14.791 11:54:47 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:14.791 | .driver_specific 00:27:14.791 | .nvme_error 00:27:14.791 | .status_code 
00:27:14.791 | .command_transient_transport_error' 00:27:14.791 11:54:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:15.050 11:54:47 -- host/digest.sh@71 -- # (( 640 > 0 )) 00:27:15.050 11:54:47 -- host/digest.sh@73 -- # killprocess 87331 00:27:15.050 11:54:47 -- common/autotest_common.sh@936 -- # '[' -z 87331 ']' 00:27:15.050 11:54:47 -- common/autotest_common.sh@940 -- # kill -0 87331 00:27:15.050 11:54:47 -- common/autotest_common.sh@941 -- # uname 00:27:15.050 11:54:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.050 11:54:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87331 00:27:15.050 11:54:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:15.050 11:54:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:15.050 killing process with pid 87331 00:27:15.050 11:54:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87331' 00:27:15.050 11:54:47 -- common/autotest_common.sh@955 -- # kill 87331 00:27:15.050 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.050 00:27:15.050 Latency(us) 00:27:15.050 [2024-11-20T11:54:48.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.050 [2024-11-20T11:54:48.093Z] =================================================================================================================== 00:27:15.050 [2024-11-20T11:54:48.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.051 11:54:47 -- common/autotest_common.sh@960 -- # wait 87331 00:27:15.309 11:54:48 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:27:15.309 11:54:48 -- host/digest.sh@54 -- # local rw bs qd 00:27:15.309 11:54:48 -- host/digest.sh@56 -- # rw=randwrite 00:27:15.309 11:54:48 -- host/digest.sh@56 -- # bs=4096 00:27:15.309 11:54:48 -- host/digest.sh@56 -- # qd=128 00:27:15.309 11:54:48 -- host/digest.sh@58 -- # bperfpid=87420 00:27:15.309 11:54:48 -- host/digest.sh@60 -- # waitforlisten 87420 /var/tmp/bperf.sock 00:27:15.309 11:54:48 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:15.309 11:54:48 -- common/autotest_common.sh@829 -- # '[' -z 87420 ']' 00:27:15.309 11:54:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:15.309 11:54:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:15.309 11:54:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:15.309 11:54:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.309 11:54:48 -- common/autotest_common.sh@10 -- # set +x 00:27:15.309 [2024-11-20 11:54:48.223957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
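The trace above is the pass/fail check for the randread digest case: host/digest.sh reads the bdev I/O statistics over the bperf RPC socket, extracts the command_transient_transport_error counter, and requires it to be greater than zero (640 here) before killing the bdevperf process with pid 87331. A minimal stand-alone sketch of that check, using only the socket path, bdev name, and jq filter visible in the trace above, would be:

    # Read per-bdev NVMe error statistics from the bdevperf app via its RPC socket.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest test passes when at least one transient transport error was counted,
    # i.e. the injected data-digest corruption was actually observed by the host.
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"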
00:27:15.309 [2024-11-20 11:54:48.224026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87420 ] 00:27:15.309 [2024-11-20 11:54:48.342302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.567 [2024-11-20 11:54:48.422358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.168 11:54:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.168 11:54:49 -- common/autotest_common.sh@862 -- # return 0 00:27:16.168 11:54:49 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:16.168 11:54:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:16.426 11:54:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:16.426 11:54:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.426 11:54:49 -- common/autotest_common.sh@10 -- # set +x 00:27:16.426 11:54:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.426 11:54:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.426 11:54:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.685 nvme0n1 00:27:16.685 11:54:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:16.685 11:54:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.685 11:54:49 -- common/autotest_common.sh@10 -- # set +x 00:27:16.685 11:54:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.685 11:54:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:16.685 11:54:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:16.685 Running I/O for 2 seconds... 
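The commands above prepare the follow-up randwrite case (4096-byte I/O, queue depth 128): a fresh bdevperf instance is started on /var/tmp/bperf.sock, NVMe error statistics and unlimited retries are enabled, the controller is attached over TCP with data digest (--ddgst) turned on, crc32c corruption is injected on the target, and the registered job is run. A condensed sketch of the same sequence, taking every command and option from the trace above and assuming only that accel_error_inject_error is issued against the target application's default RPC socket, would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start bdevperf detached (-z) with a 2-second, 128-deep, 4 KiB randwrite workload.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # Count NVMe errors per status code and retry failed I/O indefinitely.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with crc32c error injection disabled on the target.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the controller with data digest enabled so corrupted payloads are detected.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt crc32c results on the target (option values as in the trace above).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Run the registered bdevperf job.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests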
00:27:16.685 [2024-11-20 11:54:49.634198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eea00 00:27:16.685 [2024-11-20 11:54:49.634978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.685 [2024-11-20 11:54:49.635003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:16.685 [2024-11-20 11:54:49.642587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ea680 00:27:16.685 [2024-11-20 11:54:49.642940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.685 [2024-11-20 11:54:49.642966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:16.685 [2024-11-20 11:54:49.650906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2510 00:27:16.686 [2024-11-20 11:54:49.651208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.651229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.658988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e8d30 00:27:16.686 [2024-11-20 11:54:49.659267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.659282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.667093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ef270 00:27:16.686 [2024-11-20 11:54:49.667352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.667366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.675212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5220 00:27:16.686 [2024-11-20 11:54:49.675445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.675460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.683273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5658 00:27:16.686 [2024-11-20 11:54:49.683483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.683498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:005e p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.693494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6738 00:27:16.686 [2024-11-20 11:54:49.694515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.694539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.699502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f3e60 00:27:16.686 [2024-11-20 11:54:49.699799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.699830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.709484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7970 00:27:16.686 [2024-11-20 11:54:49.710275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.710299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.716663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ed920 00:27:16.686 [2024-11-20 11:54:49.717488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.717513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.686 [2024-11-20 11:54:49.724762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e1b48 00:27:16.686 [2024-11-20 11:54:49.725556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.686 [2024-11-20 11:54:49.725581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.733464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e12d8 00:27:16.945 [2024-11-20 11:54:49.734012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.734038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.741496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190df118 00:27:16.945 [2024-11-20 11:54:49.742025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.742053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.748589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e3060 00:27:16.945 [2024-11-20 11:54:49.749389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.749414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.756766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e4de8 00:27:16.945 [2024-11-20 11:54:49.757518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.757543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.765875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ef270 00:27:16.945 [2024-11-20 11:54:49.766443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.766468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.773935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e88f8 00:27:16.945 [2024-11-20 11:54:49.774509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.774531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.782025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ee5c8 00:27:16.945 [2024-11-20 11:54:49.782563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.782580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.790090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ecc78 00:27:16.945 [2024-11-20 11:54:49.790661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.790692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.797154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7970 00:27:16.945 [2024-11-20 11:54:49.797406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.797420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.806963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0788 00:27:16.945 [2024-11-20 11:54:49.807611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.807636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.814967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f4298 00:27:16.945 [2024-11-20 11:54:49.816206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.816232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.822225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190efae0 00:27:16.945 [2024-11-20 11:54:49.822907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.822931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.830287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190efae0 00:27:16.945 [2024-11-20 11:54:49.831158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.831182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.838647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5220 00:27:16.945 [2024-11-20 11:54:49.838930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.838954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.849430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e23b8 00:27:16.945 [2024-11-20 11:54:49.850429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.850452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.855472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6300 00:27:16.945 [2024-11-20 11:54:49.855641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.855669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.863654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6fa8 00:27:16.945 [2024-11-20 11:54:49.864023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.864040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.871936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e1f80 00:27:16.945 [2024-11-20 11:54:49.872335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.872356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.879910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190efae0 00:27:16.945 [2024-11-20 11:54:49.880556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.880582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.889041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190efae0 00:27:16.945 [2024-11-20 11:54:49.889691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.889719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.896089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e8088 00:27:16.945 [2024-11-20 11:54:49.896836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.896860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.904278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e23b8 00:27:16.945 [2024-11-20 11:54:49.904584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.904602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.912497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ec408 00:27:16.945 [2024-11-20 11:54:49.912831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 
11:54:49.912847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.921207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ee5c8 00:27:16.945 [2024-11-20 11:54:49.922219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.922245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.929426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ee190 00:27:16.945 [2024-11-20 11:54:49.929918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.945 [2024-11-20 11:54:49.929937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:16.945 [2024-11-20 11:54:49.939375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f3a28 00:27:16.945 [2024-11-20 11:54:49.940362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.946 [2024-11-20 11:54:49.940386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.946 [2024-11-20 11:54:49.945406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f6890 00:27:16.946 [2024-11-20 11:54:49.945668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.946 [2024-11-20 11:54:49.945683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.946 [2024-11-20 11:54:49.955326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e01f8 00:27:16.946 [2024-11-20 11:54:49.956128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.946 [2024-11-20 11:54:49.956221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.946 [2024-11-20 11:54:49.961397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190efae0 00:27:16.946 [2024-11-20 11:54:49.961528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.946 [2024-11-20 11:54:49.961544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.946 [2024-11-20 11:54:49.969742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fe720 00:27:16.946 [2024-11-20 11:54:49.969947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:16.946 [2024-11-20 11:54:49.969962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.946 [2024-11-20 11:54:49.978257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fa7d8 00:27:16.946 [2024-11-20 11:54:49.978477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.946 [2024-11-20 11:54:49.978492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:49.986926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190df550 00:27:17.206 [2024-11-20 11:54:49.987821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:49.987850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:49.994954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5a90 00:27:17.206 [2024-11-20 11:54:49.995790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:49.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.003253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f20d8 00:27:17.206 [2024-11-20 11:54:50.003630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.003645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.011515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ef270 00:27:17.206 [2024-11-20 11:54:50.011949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.011966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.019605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190feb58 00:27:17.206 [2024-11-20 11:54:50.019972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.019987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.027644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1ca0 00:27:17.206 [2024-11-20 11:54:50.027960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9553 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.027974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.035638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eee38 00:27:17.206 [2024-11-20 11:54:50.035957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.035972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.043726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f9b30 00:27:17.206 [2024-11-20 11:54:50.043991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.044006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.051765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f6890 00:27:17.206 [2024-11-20 11:54:50.052050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.052065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.059823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fc998 00:27:17.206 [2024-11-20 11:54:50.060139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.060154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.067850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7538 00:27:17.206 [2024-11-20 11:54:50.068200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.068220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.075705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e95a0 00:27:17.206 [2024-11-20 11:54:50.076037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.076052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.085090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f96f8 00:27:17.206 [2024-11-20 11:54:50.086209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:3915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.086240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.093172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f4f40 00:27:17.206 [2024-11-20 11:54:50.094461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.094490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.101293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ed920 00:27:17.206 [2024-11-20 11:54:50.102615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.102644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.109403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ef6a8 00:27:17.206 [2024-11-20 11:54:50.110615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.110639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.117609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e8d30 00:27:17.206 [2024-11-20 11:54:50.118796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.118823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.124847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e0ea0 00:27:17.206 [2024-11-20 11:54:50.125511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.206 [2024-11-20 11:54:50.125540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.206 [2024-11-20 11:54:50.134575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fa7d8 00:27:17.206 [2024-11-20 11:54:50.135464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.135490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.140565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eee38 00:27:17.207 [2024-11-20 11:54:50.140782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:31 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.140797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.150310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ea680 00:27:17.207 [2024-11-20 11:54:50.151605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.151634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.157615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7538 00:27:17.207 [2024-11-20 11:54:50.158471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.158555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.165756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190edd58 00:27:17.207 [2024-11-20 11:54:50.166713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.166742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.175335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1ca0 00:27:17.207 [2024-11-20 11:54:50.176056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.176079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.182633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fc560 00:27:17.207 [2024-11-20 11:54:50.183414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.183442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.190639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f35f0 00:27:17.207 [2024-11-20 11:54:50.191887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.191913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.198829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e1710 00:27:17.207 [2024-11-20 11:54:50.199507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.199535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.206941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ea248 00:27:17.207 [2024-11-20 11:54:50.207676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.207711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.215925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1ca0 00:27:17.207 [2024-11-20 11:54:50.216511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.216539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.223795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fc560 00:27:17.207 [2024-11-20 11:54:50.224500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.224568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.231969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5220 00:27:17.207 [2024-11-20 11:54:50.232715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.232742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:17.207 [2024-11-20 11:54:50.240062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f6458 00:27:17.207 [2024-11-20 11:54:50.240857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.207 [2024-11-20 11:54:50.240885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.248133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eaef0 00:27:17.468 [2024-11-20 11:54:50.248645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.248687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.256188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f92c0 00:27:17.468 [2024-11-20 
11:54:50.256722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.256744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.264208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e23b8 00:27:17.468 [2024-11-20 11:54:50.264712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.264732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.271967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190df988 00:27:17.468 [2024-11-20 11:54:50.272713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.272740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.279975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fbcf0 00:27:17.468 [2024-11-20 11:54:50.281160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.281189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.288059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ed0b0 00:27:17.468 [2024-11-20 11:54:50.289298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.289328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.296213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f3e60 00:27:17.468 [2024-11-20 11:54:50.297519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.297548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.304658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ee5c8 00:27:17.468 [2024-11-20 11:54:50.305665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.305700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.312430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e88f8 
00:27:17.468 [2024-11-20 11:54:50.312598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.312613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.320912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e3d08 00:27:17.468 [2024-11-20 11:54:50.321275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.321295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.328691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0ff8 00:27:17.468 [2024-11-20 11:54:50.329018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.329033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.336721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0350 00:27:17.468 [2024-11-20 11:54:50.337246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.337266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.345442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eb760 00:27:17.468 [2024-11-20 11:54:50.345809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.345825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.353580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f81e0 00:27:17.468 [2024-11-20 11:54:50.353987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.468 [2024-11-20 11:54:50.354002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:17.468 [2024-11-20 11:54:50.361735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f6458 00:27:17.469 [2024-11-20 11:54:50.362793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.370519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with 
pdu=0x2000190fac10 00:27:17.469 [2024-11-20 11:54:50.371265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.371293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.378918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e0a68 00:27:17.469 [2024-11-20 11:54:50.379454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.379475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.386061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2948 00:27:17.469 [2024-11-20 11:54:50.386961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.386989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.394114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2948 00:27:17.469 [2024-11-20 11:54:50.394980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.395010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.402315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2948 00:27:17.469 [2024-11-20 11:54:50.403144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.403170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.410422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2948 00:27:17.469 [2024-11-20 11:54:50.411327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.411354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.419283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ec408 00:27:17.469 [2024-11-20 11:54:50.420313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.420343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.427609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160b8f0) with pdu=0x2000190e6b70 00:27:17.469 [2024-11-20 11:54:50.428175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.428194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.435337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5658 00:27:17.469 [2024-11-20 11:54:50.436231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.436261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.443844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2d80 00:27:17.469 [2024-11-20 11:54:50.444173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.444188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.454162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e4140 00:27:17.469 [2024-11-20 11:54:50.455026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.455053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.460501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2948 00:27:17.469 [2024-11-20 11:54:50.460713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.460790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.470891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190dfdc0 00:27:17.469 [2024-11-20 11:54:50.471551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.471575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.478203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fd208 00:27:17.469 [2024-11-20 11:54:50.478955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.478980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.486518] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fda78 00:27:17.469 [2024-11-20 11:54:50.486841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.486856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.494818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190df988 00:27:17.469 [2024-11-20 11:54:50.495160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.495180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.469 [2024-11-20 11:54:50.502976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6fa8 00:27:17.469 [2024-11-20 11:54:50.503504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.469 [2024-11-20 11:54:50.503539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.512364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6fa8 00:27:17.729 [2024-11-20 11:54:50.512951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.729 [2024-11-20 11:54:50.512970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.520769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190feb58 00:27:17.729 [2024-11-20 11:54:50.521370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.729 [2024-11-20 11:54:50.521400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.528006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ebfd0 00:27:17.729 [2024-11-20 11:54:50.528278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.729 [2024-11-20 11:54:50.528293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.537929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e12d8 00:27:17.729 [2024-11-20 11:54:50.539132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.729 [2024-11-20 11:54:50.539160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.546170] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e23b8 00:27:17.729 [2024-11-20 11:54:50.546783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.729 [2024-11-20 11:54:50.546809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.554264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190efae0 00:27:17.729 [2024-11-20 11:54:50.554884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.729 [2024-11-20 11:54:50.554910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.562404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ea248 00:27:17.729 [2024-11-20 11:54:50.563776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.729 [2024-11-20 11:54:50.563821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:17.729 [2024-11-20 11:54:50.570337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1430 00:27:17.729 [2024-11-20 11:54:50.570966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.570989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.578380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ee5c8 00:27:17.730 [2024-11-20 11:54:50.579287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.579315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.587692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fbcf0 00:27:17.730 [2024-11-20 11:54:50.588538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.588563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.593822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6738 00:27:17.730 [2024-11-20 11:54:50.593948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.593962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 
11:54:50.603953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ea680 00:27:17.730 [2024-11-20 11:54:50.604577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.604604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.611741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190de038 00:27:17.730 [2024-11-20 11:54:50.612702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.612729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.619986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e0630 00:27:17.730 [2024-11-20 11:54:50.620393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.620411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.628836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f8618 00:27:17.730 [2024-11-20 11:54:50.629294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.629314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.636864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7538 00:27:17.730 [2024-11-20 11:54:50.637871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.637940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.645422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0bc0 00:27:17.730 [2024-11-20 11:54:50.646080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.646108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.653163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1ca0 00:27:17.730 [2024-11-20 11:54:50.654095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.654123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:27:17.730 [2024-11-20 11:54:50.661176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e1710 00:27:17.730 [2024-11-20 11:54:50.661938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.661965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.669237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ecc78 00:27:17.730 [2024-11-20 11:54:50.670153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.670181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.677262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f92c0 00:27:17.730 [2024-11-20 11:54:50.677976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.678003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.685317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fd208 00:27:17.730 [2024-11-20 11:54:50.686040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.686067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.693593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e9168 00:27:17.730 [2024-11-20 11:54:50.694007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.694030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.701874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eaab8 00:27:17.730 [2024-11-20 11:54:50.702231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.702250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.709940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fda78 00:27:17.730 [2024-11-20 11:54:50.710277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.710296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.717961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e9e10 00:27:17.730 [2024-11-20 11:54:50.718271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.718295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.725956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ddc00 00:27:17.730 [2024-11-20 11:54:50.726244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.730 [2024-11-20 11:54:50.726259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:17.730 [2024-11-20 11:54:50.733968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7da8 00:27:17.731 [2024-11-20 11:54:50.734234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.731 [2024-11-20 11:54:50.734248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:17.731 [2024-11-20 11:54:50.741987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e1710 00:27:17.731 [2024-11-20 11:54:50.742230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.731 [2024-11-20 11:54:50.742244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:17.731 [2024-11-20 11:54:50.749994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e4140 00:27:17.731 [2024-11-20 11:54:50.750213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.731 [2024-11-20 11:54:50.750228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:17.731 [2024-11-20 11:54:50.760134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ea680 00:27:17.731 [2024-11-20 11:54:50.761121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.731 [2024-11-20 11:54:50.761200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:17.731 [2024-11-20 11:54:50.765996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ed0b0 00:27:17.731 [2024-11-20 11:54:50.766866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.731 [2024-11-20 11:54:50.766893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.775294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ee5c8 00:27:17.998 [2024-11-20 11:54:50.776422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.776448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.783607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f20d8 00:27:17.998 [2024-11-20 11:54:50.784430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.784511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.791777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f9f68 00:27:17.998 [2024-11-20 11:54:50.792375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.792402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.799817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e73e0 00:27:17.998 [2024-11-20 11:54:50.800389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.800424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.807935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f57b0 00:27:17.998 [2024-11-20 11:54:50.808488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.808509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.815964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e01f8 00:27:17.998 [2024-11-20 11:54:50.816491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.816513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.824002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1ca0 00:27:17.998 [2024-11-20 11:54:50.824477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.824513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.832143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6738 00:27:17.998 [2024-11-20 11:54:50.832596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.832616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.839995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e01f8 00:27:17.998 [2024-11-20 11:54:50.841283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.998 [2024-11-20 11:54:50.841313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.998 [2024-11-20 11:54:50.848766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2d80 00:27:17.999 [2024-11-20 11:54:50.849246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.849275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.857258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e73e0 00:27:17.999 [2024-11-20 11:54:50.857767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.857794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.865471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ed4e8 00:27:17.999 [2024-11-20 11:54:50.866603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.866633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.873771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1868 00:27:17.999 [2024-11-20 11:54:50.874403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.874427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.882595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6300 00:27:17.999 [2024-11-20 11:54:50.882999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.883025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.891039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e9e10 00:27:17.999 [2024-11-20 11:54:50.891583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.891610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.898593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f5be8 00:27:17.999 [2024-11-20 11:54:50.899679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.899705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.907110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190df550 00:27:17.999 [2024-11-20 11:54:50.907795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.907822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.915553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5658 00:27:17.999 [2024-11-20 11:54:50.916048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.916072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.923965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e6300 00:27:17.999 [2024-11-20 11:54:50.924623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.924650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.931077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f5378 00:27:17.999 [2024-11-20 11:54:50.931751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.931779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.939155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f9b30 00:27:17.999 [2024-11-20 11:54:50.939811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 
11:54:50.939837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.946343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fa7d8 00:27:17.999 [2024-11-20 11:54:50.946440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.946455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.955142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e4578 00:27:17.999 [2024-11-20 11:54:50.955359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.955374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.963122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fdeb0 00:27:17.999 [2024-11-20 11:54:50.964099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.964129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.972595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f5378 00:27:17.999 [2024-11-20 11:54:50.973329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.973355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.979970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e4de8 00:27:17.999 [2024-11-20 11:54:50.980787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.980807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.987997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e27f0 00:27:17.999 [2024-11-20 11:54:50.989237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:50.989267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:50.996119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fef90 00:27:17.999 [2024-11-20 11:54:50.996807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:17.999 [2024-11-20 11:54:50.996832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:51.004159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f5be8 00:27:17.999 [2024-11-20 11:54:51.004918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:51.004945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:51.012285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e27f0 00:27:17.999 [2024-11-20 11:54:51.013710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:51.013791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:51.021179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fe720 00:27:17.999 [2024-11-20 11:54:51.021990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:51.022020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.999 [2024-11-20 11:54:51.029289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e49b0 00:27:17.999 [2024-11-20 11:54:51.029868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.999 [2024-11-20 11:54:51.029906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.037348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190de038 00:27:18.280 [2024-11-20 11:54:51.037938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.037962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.045465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f20d8 00:27:18.280 [2024-11-20 11:54:51.046054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.046082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.053569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e38d0 00:27:18.280 [2024-11-20 11:54:51.054128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25235 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.054155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.061574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eb328 00:27:18.280 [2024-11-20 11:54:51.062182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.062209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.069796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f35f0 00:27:18.280 [2024-11-20 11:54:51.070795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.070822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.078023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2510 00:27:18.280 [2024-11-20 11:54:51.078517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.078535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.088025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e95a0 00:27:18.280 [2024-11-20 11:54:51.089039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.089116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.094122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7970 00:27:18.280 [2024-11-20 11:54:51.094419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.094435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.102377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ef270 00:27:18.280 [2024-11-20 11:54:51.102737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.102752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.110821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eea00 00:27:18.280 [2024-11-20 11:54:51.111593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:9201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.111623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.118831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eff18 00:27:18.280 [2024-11-20 11:54:51.120095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.120181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.126988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1868 00:27:18.280 [2024-11-20 11:54:51.127653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.127746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.135796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e38d0 00:27:18.280 [2024-11-20 11:54:51.136446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.136476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.143694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e7c50 00:27:18.280 [2024-11-20 11:54:51.144641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.144680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.152366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e1710 00:27:18.280 [2024-11-20 11:54:51.152937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.152964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.159508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0ff8 00:27:18.280 [2024-11-20 11:54:51.160450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.160480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.167639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0ff8 00:27:18.280 [2024-11-20 11:54:51.168415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:40 nsid:1 lba:15392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.168438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.175669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0ff8 00:27:18.280 [2024-11-20 11:54:51.176583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.176612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.183899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0ff8 00:27:18.280 [2024-11-20 11:54:51.184770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.280 [2024-11-20 11:54:51.184797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:18.280 [2024-11-20 11:54:51.191871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f4b08 00:27:18.280 [2024-11-20 11:54:51.192162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.192177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.201793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f35f0 00:27:18.281 [2024-11-20 11:54:51.202589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.202616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.207778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ee5c8 00:27:18.281 [2024-11-20 11:54:51.207895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.207909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.217500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f5378 00:27:18.281 [2024-11-20 11:54:51.218755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.218834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.227243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7538 00:27:18.281 [2024-11-20 11:54:51.228288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.228313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.233185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190df988 00:27:18.281 [2024-11-20 11:54:51.233324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.233340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.241549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e2c28 00:27:18.281 [2024-11-20 11:54:51.242229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.242252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.249685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fef90 00:27:18.281 [2024-11-20 11:54:51.250295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.250363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.258888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190df988 00:27:18.281 [2024-11-20 11:54:51.259520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.259546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.267170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fd640 00:27:18.281 [2024-11-20 11:54:51.267857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.267885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.275073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fcdd0 00:27:18.281 [2024-11-20 11:54:51.276289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.276319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.283177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e95a0 00:27:18.281 [2024-11-20 
11:54:51.283974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.284003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.290551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1868 00:27:18.281 [2024-11-20 11:54:51.291030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.291052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.299883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190efae0 00:27:18.281 [2024-11-20 11:54:51.300756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.300779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.308101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f92c0 00:27:18.281 [2024-11-20 11:54:51.309391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.309471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:18.281 [2024-11-20 11:54:51.315113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fb8b8 00:27:18.281 [2024-11-20 11:54:51.315885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.281 [2024-11-20 11:54:51.315908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.324114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e2c28 00:27:18.542 [2024-11-20 11:54:51.324636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.324717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.332220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f8e88 00:27:18.542 [2024-11-20 11:54:51.332711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.332730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.340304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ebb98 
00:27:18.542 [2024-11-20 11:54:51.340781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.340800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.348354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f7da8 00:27:18.542 [2024-11-20 11:54:51.348821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.348840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.356382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f6020 00:27:18.542 [2024-11-20 11:54:51.356878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.356901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.364437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f81e0 00:27:18.542 [2024-11-20 11:54:51.364974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.365002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.372633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f9b30 00:27:18.542 [2024-11-20 11:54:51.373410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.373439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.380641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f4298 00:27:18.542 [2024-11-20 11:54:51.381855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.381883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.390336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fa7d8 00:27:18.542 [2024-11-20 11:54:51.391326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.391350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.396393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with 
pdu=0x2000190e4578 00:27:18.542 [2024-11-20 11:54:51.396625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.396640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.405248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eee38 00:27:18.542 [2024-11-20 11:54:51.405582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.405597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.413327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eff18 00:27:18.542 [2024-11-20 11:54:51.413668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.413699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.421425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f96f8 00:27:18.542 [2024-11-20 11:54:51.422531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.422611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.429433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e1f80 00:27:18.542 [2024-11-20 11:54:51.429803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.429819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.438231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fb048 00:27:18.542 [2024-11-20 11:54:51.439272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.439302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.446488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f0350 00:27:18.542 [2024-11-20 11:54:51.447018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.447040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.456478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160b8f0) with pdu=0x2000190e3498 00:27:18.542 [2024-11-20 11:54:51.457540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.457564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.462613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f81e0 00:27:18.542 [2024-11-20 11:54:51.462934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.462949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.472690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f8a50 00:27:18.542 [2024-11-20 11:54:51.473476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.473551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.478788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fb048 00:27:18.542 [2024-11-20 11:54:51.478874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.478888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.488552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190ea248 00:27:18.542 [2024-11-20 11:54:51.489816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.489844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.496529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e2c28 00:27:18.542 [2024-11-20 11:54:51.496987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.497007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.504265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5658 00:27:18.542 [2024-11-20 11:54:51.505015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.505042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.511874] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fc128 00:27:18.542 [2024-11-20 11:54:51.512621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.512710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:18.542 [2024-11-20 11:54:51.521288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190eb328 00:27:18.542 [2024-11-20 11:54:51.522562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.542 [2024-11-20 11:54:51.522592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:18.543 [2024-11-20 11:54:51.529249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fe720 00:27:18.543 [2024-11-20 11:54:51.529787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.543 [2024-11-20 11:54:51.529810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:18.543 [2024-11-20 11:54:51.536458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f6890 00:27:18.543 [2024-11-20 11:54:51.536638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.543 [2024-11-20 11:54:51.536674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:18.543 [2024-11-20 11:54:51.546628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f5be8 00:27:18.543 [2024-11-20 11:54:51.547338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.543 [2024-11-20 11:54:51.547425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:18.543 [2024-11-20 11:54:51.555272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f2d80 00:27:18.543 [2024-11-20 11:54:51.556373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.543 [2024-11-20 11:54:51.556469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:18.543 [2024-11-20 11:54:51.564210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e0a68 00:27:18.543 [2024-11-20 11:54:51.564972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.543 [2024-11-20 11:54:51.565061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:18.543 [2024-11-20 11:54:51.573064] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190f1ca0 00:27:18.543 [2024-11-20 11:54:51.573507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.543 [2024-11-20 11:54:51.573588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:18.543 [2024-11-20 11:54:51.582151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e5220 00:27:18.802 [2024-11-20 11:54:51.582729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.802 [2024-11-20 11:54:51.582799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:18.802 [2024-11-20 11:54:51.589690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e84c0 00:27:18.802 [2024-11-20 11:54:51.590483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.802 [2024-11-20 11:54:51.590555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:18.802 [2024-11-20 11:54:51.598367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190fdeb0 00:27:18.802 [2024-11-20 11:54:51.598926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.802 [2024-11-20 11:54:51.599009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:18.802 [2024-11-20 11:54:51.606875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e99d8 00:27:18.802 [2024-11-20 11:54:51.607401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.802 [2024-11-20 11:54:51.607486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:18.802 [2024-11-20 11:54:51.614986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160b8f0) with pdu=0x2000190e7c50 00:27:18.802 [2024-11-20 11:54:51.615758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.802 [2024-11-20 11:54:51.615849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:18.802 00:27:18.802 Latency(us) 00:27:18.802 [2024-11-20T11:54:51.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.802 [2024-11-20T11:54:51.845Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.802 nvme0n1 : 2.00 30862.60 120.56 0.00 0.00 4142.80 1616.94 12821.02 00:27:18.802 [2024-11-20T11:54:51.845Z] =================================================================================================================== 00:27:18.802 
[2024-11-20T11:54:51.845Z] Total : 30862.60 120.56 0.00 0.00 4142.80 1616.94 12821.02 00:27:18.802 0 00:27:18.802 11:54:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:18.802 11:54:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:18.802 | .driver_specific 00:27:18.802 | .nvme_error 00:27:18.802 | .status_code 00:27:18.802 | .command_transient_transport_error' 00:27:18.802 11:54:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:18.802 11:54:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:19.063 11:54:51 -- host/digest.sh@71 -- # (( 242 > 0 )) 00:27:19.063 11:54:51 -- host/digest.sh@73 -- # killprocess 87420 00:27:19.063 11:54:51 -- common/autotest_common.sh@936 -- # '[' -z 87420 ']' 00:27:19.063 11:54:51 -- common/autotest_common.sh@940 -- # kill -0 87420 00:27:19.063 11:54:51 -- common/autotest_common.sh@941 -- # uname 00:27:19.063 11:54:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:19.063 11:54:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87420 00:27:19.063 11:54:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:19.063 11:54:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:19.063 11:54:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87420' 00:27:19.063 killing process with pid 87420 00:27:19.063 11:54:51 -- common/autotest_common.sh@955 -- # kill 87420 00:27:19.063 Received shutdown signal, test time was about 2.000000 seconds 00:27:19.063 00:27:19.063 Latency(us) 00:27:19.063 [2024-11-20T11:54:52.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.063 [2024-11-20T11:54:52.106Z] =================================================================================================================== 00:27:19.063 [2024-11-20T11:54:52.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.063 11:54:51 -- common/autotest_common.sh@960 -- # wait 87420 00:27:19.323 11:54:52 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:27:19.323 11:54:52 -- host/digest.sh@54 -- # local rw bs qd 00:27:19.323 11:54:52 -- host/digest.sh@56 -- # rw=randwrite 00:27:19.323 11:54:52 -- host/digest.sh@56 -- # bs=131072 00:27:19.323 11:54:52 -- host/digest.sh@56 -- # qd=16 00:27:19.323 11:54:52 -- host/digest.sh@58 -- # bperfpid=87507 00:27:19.323 11:54:52 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:19.323 11:54:52 -- host/digest.sh@60 -- # waitforlisten 87507 /var/tmp/bperf.sock 00:27:19.323 11:54:52 -- common/autotest_common.sh@829 -- # '[' -z 87507 ']' 00:27:19.323 11:54:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:19.323 11:54:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:19.323 11:54:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:19.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:19.323 11:54:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:19.323 11:54:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.323 [2024-11-20 11:54:52.167013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
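For orientation, the pass/fail decision just logged (the "(( 242 > 0 ))" check from host/digest.sh) amounts to reading a single counter out of bdev_get_iostat over the bperf RPC socket. A minimal sketch of that query, using the socket and bdev names from this run; the errcount variable name is purely illustrative, and the counter is only populated because bdev_nvme_set_options was called with --nvme-error-stat earlier in the test:

# read the number of data-digest failures recorded as transient transport errors
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# the test passes as long as at least one such error was seen and counted
(( errcount > 0 )) && echo "transient transport errors counted: $errcount"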
00:27:19.323 [2024-11-20 11:54:52.167524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87507 ] 00:27:19.323 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:19.323 Zero copy mechanism will not be used. 00:27:19.323 [2024-11-20 11:54:52.304875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.582 [2024-11-20 11:54:52.382448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.152 11:54:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:20.152 11:54:53 -- common/autotest_common.sh@862 -- # return 0 00:27:20.152 11:54:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:20.152 11:54:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:20.152 11:54:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:20.152 11:54:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.152 11:54:53 -- common/autotest_common.sh@10 -- # set +x 00:27:20.412 11:54:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.412 11:54:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:20.412 11:54:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:20.412 nvme0n1 00:27:20.673 11:54:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:20.673 11:54:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.673 11:54:53 -- common/autotest_common.sh@10 -- # set +x 00:27:20.673 11:54:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.673 11:54:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:20.673 11:54:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:20.673 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:20.673 Zero copy mechanism will not be used. 00:27:20.673 Running I/O for 2 seconds... 
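The setup traced above for this second pass (randwrite, 128 KiB I/O, queue depth 16) can be read as the following sequence; this is only a condensed sketch of the commands shown in the log, it assumes rpc_cmd points at the nvmf target's default RPC socket as in the autotest helpers, and it backgrounds bdevperf for brevity instead of using the test's own waitforlisten handling:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# start bdevperf as the TCP host: 128 KiB random writes, 2 s, queue depth 16
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

# count NVMe errors per status code and retry indefinitely instead of failing I/O
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# attach with data digest enabled while crc32c corruption is still disabled on the target
rpc_cmd accel_error_inject_error -o crc32c -t disable
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# enable crc32c error injection on the target (as invoked above with -i 32) and drive I/O;
# each injected corruption surfaces below as a "Data digest error" paired with a
# COMMAND TRANSIENT TRANSPORT ERROR completion
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests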
00:27:20.673 [2024-11-20 11:54:53.567515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.673 [2024-11-20 11:54:53.567992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.568029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.570632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.570920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.570950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.573535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.573746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.573772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.576498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.576647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.576665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.579220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.579318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.579335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.582007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.582201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.582217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.584784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.585085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.585112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.587510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.587639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.587655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.590305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.590491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.590507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.593088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.593334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.593349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.595762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.595868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.595884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.598581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.598779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.598795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.601343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.601563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.601579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.604096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.604201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.604217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.606907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.607143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.607159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.609613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.609811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.609826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.612268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.612475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.612491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.615061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.615246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.615261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.617865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.618127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.618159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.620522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.620665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.620681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.623285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.623445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.623461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.626036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.626204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.626220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.628825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.628952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.628981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.631562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.631762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.631778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.634281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.634465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.634480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.637018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.637107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.637122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.639829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.674 [2024-11-20 11:54:53.640000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.674 [2024-11-20 11:54:53.640016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.674 [2024-11-20 11:54:53.642558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.642697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 
11:54:53.642713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.645254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.645355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.645371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.648084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.648204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.648220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.650840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.650967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.650983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.653625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.653806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.653822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.656434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.656553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.656568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.659209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.659374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.659389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.661942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.662098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:20.675 [2024-11-20 11:54:53.662113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.664723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.664892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.664907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.667456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.667594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.667610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.670217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.670336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.670352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.673068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.673235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.673251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.675828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.675945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.675961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.678522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.678703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.678718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.681266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.681420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.681435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.684030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.684147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.684163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.686759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.686921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.686936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.689519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.689667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.689684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.692267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.692472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.692487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.695037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.695214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.695229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.697764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.697883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.697899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.700583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.700789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.700805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.703310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.703473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.703488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.706015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.706165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.706183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.708857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.709042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.709058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.675 [2024-11-20 11:54:53.711572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.675 [2024-11-20 11:54:53.711791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.675 [2024-11-20 11:54:53.711807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.714349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.714522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.714537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.717122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.717272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.717287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.719859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.719994] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.720009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.722649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.722819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.722834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.725346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.725546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.725562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.728123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.728282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.728298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.730818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.730933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.730949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.733500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.733589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.938 [2024-11-20 11:54:53.733605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.938 [2024-11-20 11:54:53.736309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.938 [2024-11-20 11:54:53.736480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.736496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.739047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.739169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.739184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.741844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.741991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.742007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.744601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.744751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.744767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.747316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.747483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.747498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.750074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.750240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.750256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.752839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.752970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.752986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.755522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.755655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.755697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.758284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 
11:54:53.758425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.758440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.761098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.761287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.761304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.763860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.763993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.764009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.766561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.766774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.766789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.769336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.769477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.769492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.772137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.772235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.772250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.774887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.775025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.775040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.777657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 
00:27:20.939 [2024-11-20 11:54:53.777766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.777782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.780445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.780613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.780629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.783190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.783353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.783369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.785976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.786148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.786163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.788783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.788991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.789024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.791440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.791644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.791659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.794256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.794439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.794454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.797063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.797233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.797248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.799842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.799971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.799986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.802590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.802777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.939 [2024-11-20 11:54:53.802794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.939 [2024-11-20 11:54:53.805356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.939 [2024-11-20 11:54:53.805508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.805525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.808143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.808313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.808329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.810869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.811003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.811019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.813608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.813782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.813798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.816441] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.816571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.816587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.819129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.819303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.819319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.821919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.822107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.822122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.824630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.824808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.824824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.827395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.827520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.827535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.830217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.830380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.830395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.833061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.833209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.833224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.835819] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.835995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.836010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.838483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.838656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.838702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.841261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.841390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.841406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.844107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.844278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.844294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.846831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.846959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.846975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.849553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.849731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.849747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.852296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.852461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.852476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.940 
[2024-11-20 11:54:53.854958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.855124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.855140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.857806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.857971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.857987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.860594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.860738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.860754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.863348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.863534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.863550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.866056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.866210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.866226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.868777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.868909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.868925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.871585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.871746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.871762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:27:20.940 [2024-11-20 11:54:53.874251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.940 [2024-11-20 11:54:53.874370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.940 [2024-11-20 11:54:53.874385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.877040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.877133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.877149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.879804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.879894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.879910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.882471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.882549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.882565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.885314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.885465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.885481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.888081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.888199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.888215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.890827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.890980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.890996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.893576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.893691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.893718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.896369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.896547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.896562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.899111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.899252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.899268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.901871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.901993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.902009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.904692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.904852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.904868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.907385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.907564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.907580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.910110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.910297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.910313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.912877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.913005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.913033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.915631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.915759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.915775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.918452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.918631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.918647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.921217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.921363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.921378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.924018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.924151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.924167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.926826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.926967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.926983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.929597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.929692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.929708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.932489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.932695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.932712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.935324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.935447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.935463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.938161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.938295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.938312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.941028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.941181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.941197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.944352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.944521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.941 [2024-11-20 11:54:53.944536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.941 [2024-11-20 11:54:53.947191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.941 [2024-11-20 11:54:53.947342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.947359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.950118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.950299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 
11:54:53.950327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.952991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.953136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.953152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.955866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.956051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.956068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.958636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.959029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.959060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.961517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.961628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.961644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.964397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.964547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.964563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.967325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.967480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.967505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.970095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.970216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.970231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.972862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.973005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.973020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.942 [2024-11-20 11:54:53.975567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:20.942 [2024-11-20 11:54:53.975673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.942 [2024-11-20 11:54:53.975690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.978333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.978500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.978515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.981106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.981264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.981279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.983824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.983920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.983936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.986615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.986830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.986846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.989391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.989543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.989560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.992194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.992367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.992382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.994938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.995105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.995121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:53.997742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:53.997851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:53.997866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.000571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.000764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.000780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.003297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.003465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.003480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.006074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.006188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.006203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.008871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.009021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.009036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.011550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.011759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.011775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.014301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.014486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.014502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.017063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.017193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.017209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.019870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.019981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.019996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.022598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.022795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.022811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.025395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.025539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.025554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.028127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 
[2024-11-20 11:54:54.028271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.028287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.030958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.031115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.031131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.033733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.033865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.033880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.036509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.036681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.036697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.039236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.039399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.039414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.206 [2024-11-20 11:54:54.042084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.206 [2024-11-20 11:54:54.042284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.206 [2024-11-20 11:54:54.042299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.044879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.045031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.045057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.047552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) 
with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.047700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.047717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.050290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.050427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.050442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.053103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.053270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.053286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.055844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.055942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.055959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.058564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.058754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.058770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.061332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.061497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.061513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.064153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.064267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.064283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.066954] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.067141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.067156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.069768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.069889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.069904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.072503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.072694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.072712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.075252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.075419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.075435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.078084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.078274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.078289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.080859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.080990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.081012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.083575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.083678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.083695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.086371] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.086537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.086552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.089132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.089261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.089277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.091861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.091991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.092012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.094622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.094800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.094816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.097343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.097490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.097506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.100115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.100193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.100209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.102966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.103098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.103114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:21.207 [2024-11-20 11:54:54.105700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.105865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.105881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.108474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.108563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.108578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.111269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.111455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.207 [2024-11-20 11:54:54.111470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.207 [2024-11-20 11:54:54.113969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.207 [2024-11-20 11:54:54.114038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.114053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.116796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.116892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.116909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.119569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.119743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.119759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.122363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.122547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.122562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.125161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.125301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.125317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.127943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.128059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.128075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.130704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.130880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.130896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.133434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.133582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.133597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.136260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.136387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.136403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.139137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.139309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.139325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.141880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.142022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.144719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.144875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.144892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.147777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.147921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.147938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.150509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.150650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.150665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.153308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.153477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.153493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.156047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.156253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.156269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.158752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.158824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.158840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.161501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.161648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.161664] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.164280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.164415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.164430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.167090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.167250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.167266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.169802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.169965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.169980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.172628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.172822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.172840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.175306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.175470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.175485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.178043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.178169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.178185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.180851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.181036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:21.208 [2024-11-20 11:54:54.181052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.183549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.183713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.183728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.208 [2024-11-20 11:54:54.186316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.208 [2024-11-20 11:54:54.186480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.208 [2024-11-20 11:54:54.186495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.189090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.189223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.189239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.191798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.191979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.191995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.194588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.194732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.194748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.197339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.197451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.197468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.200106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.200208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.200224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.202893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.203036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.203051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.205624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.205748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.205763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.208413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.208594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.208609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.211099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.211240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.211256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.213829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.213954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.213981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.216720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.216873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.216889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.219363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.219514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.219529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.222121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.222291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.222307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.224878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.225032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.225054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.227555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.227707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.227723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.230326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.230502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.230518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.233103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.233218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.233234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.235931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.236127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.236142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.238661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.238809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.238824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.209 [2024-11-20 11:54:54.241412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.209 [2024-11-20 11:54:54.241551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.209 [2024-11-20 11:54:54.241566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.244245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.244423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.244439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.246921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.247071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.247088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.249708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.249866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.249894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.252534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.252722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.252738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.255255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.255384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.255399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.258048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 
[2024-11-20 11:54:54.258214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.258230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.260793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.260942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.260957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.263481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.263621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.263637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.266226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.266409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.266424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.269027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.269191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.269207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.271766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.271951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.271966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.274567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.274735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.274751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.277313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.277497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.277513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.280110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.280294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.280309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.282847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.282993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.283008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.285611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.285817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.285833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.288408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.288577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.288592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.291094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.291237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.291252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.293870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.294042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.294057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.296590] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.296777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.296794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.299252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.299368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.473 [2024-11-20 11:54:54.299383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.473 [2024-11-20 11:54:54.302024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.473 [2024-11-20 11:54:54.302157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.302173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.304813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.304901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.304916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.307467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.307546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.307561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.310257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.310405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.310423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.312964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.313083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.313098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:21.474 [2024-11-20 11:54:54.315728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.315900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.315916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.318490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.318605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.318620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.321324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.321498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.321513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.324030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.324203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.324219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.326780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.326934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.326950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.329555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.329715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.329731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.332288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.332400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.332415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.335073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.335255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.335270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.337792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.337957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.337973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.340568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.340662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.340691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.343339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.343496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.343512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.346109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.346267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.346282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.348943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.349139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.349154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.351667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.351900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.351916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.354445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.354550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.354566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.357269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.357455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.357471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.360079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.360212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.360228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.362811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.362952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.362968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.365598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.365792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.365807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.368397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.368541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.368557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.371187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.371354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.371369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.373931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.374090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.474 [2024-11-20 11:54:54.374107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.474 [2024-11-20 11:54:54.376690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.474 [2024-11-20 11:54:54.376835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.376850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.379422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.379577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.379593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.382153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.382304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.382320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.384912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.385077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.385092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.387634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.387827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.387843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.390373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.390468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 
[2024-11-20 11:54:54.390484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.393182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.393354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.393369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.395969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.396120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.396136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.398701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.398827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.398843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.401507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.401690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.401717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.404250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.404399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.404415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.407019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.407203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.407219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.409749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.409891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.409906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.412450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.412584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.412599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.415239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.415404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.415419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.418001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.418171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.418186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.420820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.421001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.421018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.423529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.423698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.423716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.426299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.426436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.426452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.429136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.429298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.429315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.431845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.432029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.432044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.434616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.434798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.434813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.437375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.437537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.437553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.440160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.440276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.440292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.442919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.443124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.443139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.445647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.445858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.445873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.448445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.448632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.475 [2024-11-20 11:54:54.448647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.475 [2024-11-20 11:54:54.451203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.475 [2024-11-20 11:54:54.451347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.451363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.453947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.454058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.454075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.456704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.456806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.456823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.459445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.459551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.459567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.462256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.462451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.462469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.465057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.465189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.465204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.467799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 
[2024-11-20 11:54:54.467947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.467963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.470614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.470773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.470791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.473291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.473500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.473515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.476090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.476256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.476271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.478847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.479044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.479059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.481551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.481641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.481657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.484405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.484591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.484607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.487176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.487327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.487342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.489941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.490131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.490147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.492711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.492835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.492851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.495416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.495607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.495623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.498183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.498345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.498360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.500891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.501035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.501050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.503621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.503775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.503797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.506442] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.506582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.506598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.476 [2024-11-20 11:54:54.509130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.476 [2024-11-20 11:54:54.509324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.476 [2024-11-20 11:54:54.509339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.511940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.512102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.512118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.514679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.514829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.514844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.517437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.517581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.517596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.520231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.520398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.520414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.522958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.523123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.523138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:21.740 [2024-11-20 11:54:54.525751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.525946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.525962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.528475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.528600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.528616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.531192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.531343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.531358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.533974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.534149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.534164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.536758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.536908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.536923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.539476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.539664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.539689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.542232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.542380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.542396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.544986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.545095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.545111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.547716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.547965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.547980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.550452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.550620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.550636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.553222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.553348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.553363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.740 [2024-11-20 11:54:54.556062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.740 [2024-11-20 11:54:54.556233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.740 [2024-11-20 11:54:54.556249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.558786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.558956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.558971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.561601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.561830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.561846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.564374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.564589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.564611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.567154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.567245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.567261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.569991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.570181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.570197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.572787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.572959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.572975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.575486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.575583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.575598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.578284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.578436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.578461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.581075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.581193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.581209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.583829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.583992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.584008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.586575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.586743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.586758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.589292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.589431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.589446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.592113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.592274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.592292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.594850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.594992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.595007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.597565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.597768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.597784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.600284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.600433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 
[2024-11-20 11:54:54.600448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.603034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.603167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.603183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.605859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.606015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.606032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.608656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.608752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.608769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.611408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.611568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.611583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.614105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.614261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.614278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.616828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.616940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.616955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.619610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.619781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.619805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.622365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.622509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.622525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.625209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.741 [2024-11-20 11:54:54.625368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.741 [2024-11-20 11:54:54.625384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.741 [2024-11-20 11:54:54.627956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.628101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.628117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.630657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.630813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.630829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.633461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.633623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.633638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.636234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.636353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.636368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.638956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.639128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.639144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.641716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.641850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.641866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.644442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.644551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.644566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.647257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.647443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.647458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.649966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.650139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.650155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.652711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.652872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.652887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.655465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.655634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.655649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.658181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.658324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.658340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.660976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.661171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.661186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.663755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.663938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.663954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.666550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.666661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.666690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.669280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.669469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.669484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.672101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.672223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.672239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.674784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.674961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.674977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.677587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 
[2024-11-20 11:54:54.677747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.677764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.680329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.680479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.680495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.683119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.683280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.683296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.685861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.686044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.686060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.688593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.688738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.688754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.691390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.691548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.691565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.694049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.742 [2024-11-20 11:54:54.694210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.742 [2024-11-20 11:54:54.694227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.742 [2024-11-20 11:54:54.696822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.696978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.697005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.699550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.699748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.699763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.702243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.702382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.702397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.705076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.705229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.705244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.707782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.707952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.707967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.710550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.710725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.710741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.713301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.713516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.713531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.716091] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.716266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.716282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.718798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.718963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.718978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.721603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.721782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.721798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.724431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.724584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.724600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.727167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.727260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.727276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.729935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.730107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.730123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.732664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.732803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.732819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:21.743 [2024-11-20 11:54:54.735336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.735534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.735549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.738136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.738312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.738327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.740924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.741071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.741087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.743647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.743770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.743792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.746473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.746612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.746628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.749216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.749432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.749448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.751997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.752084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.752100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.754732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.754874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.754890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.757502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.757627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.757642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.760343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.760497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.760513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.763038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.743 [2024-11-20 11:54:54.763162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.743 [2024-11-20 11:54:54.763178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.743 [2024-11-20 11:54:54.765837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.744 [2024-11-20 11:54:54.766009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.744 [2024-11-20 11:54:54.766024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.744 [2024-11-20 11:54:54.768664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.744 [2024-11-20 11:54:54.768798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.744 [2024-11-20 11:54:54.768813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.744 [2024-11-20 11:54:54.771353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.744 [2024-11-20 11:54:54.771473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.744 [2024-11-20 11:54:54.771488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.744 [2024-11-20 11:54:54.774183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.744 [2024-11-20 11:54:54.774350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.744 [2024-11-20 11:54:54.774366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.744 [2024-11-20 11:54:54.776900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:21.744 [2024-11-20 11:54:54.777019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.744 [2024-11-20 11:54:54.777035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.779589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.779796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.779828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.782363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.782535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.782550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.785173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.785270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.785286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.787957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.788134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.788150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.790645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.790794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.790810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.793341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.793481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.793496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.796162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.796319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.796335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.798878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.799007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.799022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.801628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.801826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.801841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.804452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.804591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.804606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.807176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.807343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.807358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.809949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.810116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 
[2024-11-20 11:54:54.810132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.812726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.812821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.812836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.815479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.815660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.815688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.818241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.818395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.818410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.821073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.821196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.821211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.823837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.824015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.824031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.826573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.826747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.826762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.829311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.829461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.829476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.832155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.832323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.832339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.834845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.835051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.835066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.006 [2024-11-20 11:54:54.837609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.006 [2024-11-20 11:54:54.837831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.006 [2024-11-20 11:54:54.837847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.840407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.840533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.840549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.843147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.843297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.843313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.845914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.846077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.846093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.848721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.848852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.848867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.851417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.851578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.851593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.854196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.854364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.854379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.857000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.857153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.857169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.859714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.859882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.859897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.862446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.862604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.862619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.865282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.865455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.865470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.868056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.868176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.868192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.870781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.870883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.870899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.873560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.873737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.873752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.876369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.876492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.876508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.879099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.879252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.881887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.882051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.882066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.884663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.884813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.884828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.887398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 
[2024-11-20 11:54:54.887556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.887571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.890161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.890306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.890322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.892913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.893064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.893079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.895639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.895843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.895858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.898428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.898575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.898590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.901150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.901304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.901320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.903900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.904054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.904070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.906580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.007 [2024-11-20 11:54:54.906687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.007 [2024-11-20 11:54:54.906703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.007 [2024-11-20 11:54:54.909411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.909600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.909616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.912181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.912324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.912340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.914896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.915029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.915045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.917645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.917797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.917813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.920369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.920510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.920526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.923139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.923310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.923325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.925918] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.926068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.926086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.928675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.928792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.928808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.931398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.931546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.931562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.934145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.934290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.934305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.936940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.937122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.937138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.939649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.939855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.939871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.942396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.942542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.942557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
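(Editor's note: the repeated tcp.c:2036:data_crc32_calc_done errors in this stretch of the log come from the TCP transport verifying the CRC-32C data digest (DDGST) of each received data PDU; when the computed digest does not match the digest carried in the PDU, the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the host is permitted to retry. The snippet below is a minimal, dependency-free sketch of such a CRC-32C digest check for illustration only; it is not SPDK code, and the names payload, received_digest, and verify_data_digest are assumptions introduced here.)

# Illustrative CRC-32C (Castagnoli) data-digest check -- a sketch, not SPDK's implementation.
def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    # Bitwise CRC-32C, reflected polynomial 0x82F63B78; check value for b"123456789" is 0xE3069283.
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def verify_data_digest(payload: bytes, received_digest: int) -> bool:
    # Roughly mirrors the comparison done when data CRC calculation completes:
    # a mismatch is what the log above reports as "Data digest error".
    return crc32c(payload) == received_digest

if __name__ == "__main__":
    pdu_data = b"123456789"
    good = crc32c(pdu_data)
    print(verify_data_digest(pdu_data, good))        # True  -> digest OK
    print(verify_data_digest(pdu_data, good ^ 0x1))  # False -> "Data digest error" path

(The bitwise loop keeps the sketch self-contained; production code would normally use a table-driven or hardware-accelerated CRC-32C. The raw log resumes below.)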
00:27:22.008 [2024-11-20 11:54:54.945156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.945332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.945347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.947905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.948093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.948108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.950533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.950616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.950632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.953335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.953446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.953462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.956183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.956303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.956319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.958986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.959152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.959167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.961797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.961976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.961991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.964666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.964804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.964820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.967581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.967736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.967753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.970383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.970552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.970568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.973265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.973439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.008 [2024-11-20 11:54:54.973469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.008 [2024-11-20 11:54:54.976127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.008 [2024-11-20 11:54:54.976295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.976321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.978987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.979137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.979164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.981922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.982071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.982098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.984693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.984847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.984871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.987479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.987664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.987684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.990219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.990357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.990375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.992962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.993080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.993096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.995681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.995897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.995912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:54.998442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:54.998603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:54.998618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.001165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.001292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.001307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.003989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.004151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.004167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.006750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.006891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.006906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.009550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.009730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.009746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.012288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.012416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.012432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.015001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.015125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.015140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.017815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.017987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.018002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.020560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.020756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.020772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.023272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.023417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.023432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.026002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.026148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.026164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.028690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.028767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.028782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.031409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.031571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.031587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.034156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.034286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.034302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.036980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.037136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.037152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.039698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.039861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 
11:54:55.039876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.009 [2024-11-20 11:54:55.042498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.009 [2024-11-20 11:54:55.042695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.009 [2024-11-20 11:54:55.042710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.271 [2024-11-20 11:54:55.045214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.271 [2024-11-20 11:54:55.045377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.271 [2024-11-20 11:54:55.045393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.271 [2024-11-20 11:54:55.048006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.271 [2024-11-20 11:54:55.048132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.271 [2024-11-20 11:54:55.048148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.271 [2024-11-20 11:54:55.050673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.271 [2024-11-20 11:54:55.050825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.271 [2024-11-20 11:54:55.050841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.271 [2024-11-20 11:54:55.053406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.053544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.053560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.056169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.056329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.056344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.058931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.059090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:22.272 [2024-11-20 11:54:55.059105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.061689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.061864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.061880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.064539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.064749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.064764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.067325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.067483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.067500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.070061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.070203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.070219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.072876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.073035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.073056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.075539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.075690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.075707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.078282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.078415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.078430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.081082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.081265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.081282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.083841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.083967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.083983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.086562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.086761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.086777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.089312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.089458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.089474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.092024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.092168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.092184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.094801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.094956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.094972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.097496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.097692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.097708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.100270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.100426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.100442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.102982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.103147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.103163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.105728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.105894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.105910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.108592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.108767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.108783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.111254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.111451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.111466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.114002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.114187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.114203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.116795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.116911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.116926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.119534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.119661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.119699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.122357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.122539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.122555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.125127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.272 [2024-11-20 11:54:55.125274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.272 [2024-11-20 11:54:55.125289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.272 [2024-11-20 11:54:55.127897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.128071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.128087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.130570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.130742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.130758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.133335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.133475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.133490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.136127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.136321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.136347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.138988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.139139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.139154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.141739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.141911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.141927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.144426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.144567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.144582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.147152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.147238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.147254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.149891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.150055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.150071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.152632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.152800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.152816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.155391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 
11:54:55.155516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.155532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.158205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.158343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.158359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.160918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.161075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.161090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.163683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.163865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.163882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.166437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.166598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.166613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.169210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.169302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.169318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.172009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.172168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.172183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.174675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with 
pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.174854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.174870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.177434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.177566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.177582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.180258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.180424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.180451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.182983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.183154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.183169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.185832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.186008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.186024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.188599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.188741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.188757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.191331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.191471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.191487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.194047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.194207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.194223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.196724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.196886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.196902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.273 [2024-11-20 11:54:55.199455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.273 [2024-11-20 11:54:55.199608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.273 [2024-11-20 11:54:55.199624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.202187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.202286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.202302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.204967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.205045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.205061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.207685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.207825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.207841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.210440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.210609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.210625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.213150] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.213280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.213296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.215948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.216081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.216097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.218660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.218814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.218829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.221444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.221618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.221633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.224232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.224407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.224422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.226894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.227001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.227017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.229695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.229843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.229858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.232468] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.232588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.232604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.235222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.235384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.235399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.238024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.238152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.238168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.240795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.240959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.240976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.243485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.243623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.243639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.246227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.246326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.246341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.249132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.249317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.249333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:22.274 [2024-11-20 11:54:55.251862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.252031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.252048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.254576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.254750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.254766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.257375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.257515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.257531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.260167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.260275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.260291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.262959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.263102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.263117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.265631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.265782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.265798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.268426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.268602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.268618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.271092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.271249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.271266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.273803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.274 [2024-11-20 11:54:55.273931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.274 [2024-11-20 11:54:55.273947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.274 [2024-11-20 11:54:55.276594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.276783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.276799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.279309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.279468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.279483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.282068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.282259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.282274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.284813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.284940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.284956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.287538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.287729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.287762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.290264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.290352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.290368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.293096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.293228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.293244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.295807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.295981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.295996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.298614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.298790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.298806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.301355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.301498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.301514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.304094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.304266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.304281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.306779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.306947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.306962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.275 [2024-11-20 11:54:55.309543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.275 [2024-11-20 11:54:55.309675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.275 [2024-11-20 11:54:55.309703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.312332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.312500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.312516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.315026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.315142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.315157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.317813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.317989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.318005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.320614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.320769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.320786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.323323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.323444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.323459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.326129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.326319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 
[2024-11-20 11:54:55.326334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.328901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.329029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.329045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.331602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.331802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.331834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.334360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.334503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.334518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.337161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.337294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.337310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.339995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.340146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.340161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.342670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.342826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.342841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.345447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.345618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.345633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.348179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.348308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.348324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.350908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.351026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.351041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.353625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.353818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.353834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.356356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.356477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.356493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.359006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.359152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.359167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.361799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.361978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.361994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.537 [2024-11-20 11:54:55.364598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.537 [2024-11-20 11:54:55.364718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.537 [2024-11-20 11:54:55.364734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.367347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.367508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.367523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.370144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.370276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.370291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.372837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.372953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.372969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.375552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.375742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.375757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.378300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.378401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.378416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.381096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.381280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.381295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.383808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.383965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.383981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.386548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.386670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.386698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.389372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.389536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.389551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.392173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.392320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.392337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.394938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.395097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.395114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.397701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.397864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.397879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.400500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.400588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.400603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.403260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 
[2024-11-20 11:54:55.403430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.403445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.405979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.406122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.406138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.408748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.408920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.408935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.411453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.411602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.411618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.414198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.414341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.414357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.417022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.417192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.417208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.419704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.419883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.419898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.422467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.422637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.422653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.425253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.425401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.425417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.428020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.428118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.428135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.538 [2024-11-20 11:54:55.430773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.538 [2024-11-20 11:54:55.430938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.538 [2024-11-20 11:54:55.430953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.433546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.433700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.433716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.436317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.436436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.436452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.439115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.439296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.439311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.441822] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.441977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.441993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.444595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.444740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.444756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.447356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.447519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.447534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.450109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.450283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.450298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.452887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.453066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.453096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.455583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.455754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.458317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.458487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.458502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:22.539 [2024-11-20 11:54:55.461128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.461296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.461313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.463895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.464049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.464065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.466643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.466811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.466826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.469416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.469555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.469570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.472216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.472338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.472355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.474951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.475141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.475156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.477707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.477859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.477875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.480486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.480590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.480606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.483315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.483476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.483491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.486046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.486173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.486189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.488847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.489015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.489031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.491563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.491725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.491741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.539 [2024-11-20 11:54:55.494276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.539 [2024-11-20 11:54:55.494362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.539 [2024-11-20 11:54:55.494378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.497178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.497339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.497354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.499963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.500047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.500063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.502707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.502868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.502883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.505470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.505626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.505644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.508259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.508385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.508401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.511012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.511186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.511202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.513777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.513920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.513948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.516602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.516772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.516787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.519313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.519476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.519491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.522102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.522227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.522243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.525086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.525267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.525286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.527947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.528098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.528113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.530678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.530840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.530856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.533455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.533623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.533638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.536223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.536346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 
[2024-11-20 11:54:55.536362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.538999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.539159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.539174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.541711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.541852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.541868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.544480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.544624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.544640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.547250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.547396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.547411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.549963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.550188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.550203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.552736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.552804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.552820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.555459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.555618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.555632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.540 [2024-11-20 11:54:55.558196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160bc30) with pdu=0x2000190fef90 00:27:22.540 [2024-11-20 11:54:55.558370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.540 [2024-11-20 11:54:55.558385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.540 00:27:22.540 Latency(us) 00:27:22.540 [2024-11-20T11:54:55.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.540 [2024-11-20T11:54:55.583Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:22.540 nvme0n1 : 2.00 11182.91 1397.86 0.00 0.00 1427.91 1116.12 3319.73 00:27:22.541 [2024-11-20T11:54:55.584Z] =================================================================================================================== 00:27:22.541 [2024-11-20T11:54:55.584Z] Total : 11182.91 1397.86 0.00 0.00 1427.91 1116.12 3319.73 00:27:22.541 0 00:27:22.800 11:54:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:22.800 11:54:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:22.800 11:54:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:22.800 | .driver_specific 00:27:22.800 | .nvme_error 00:27:22.800 | .status_code 00:27:22.800 | .command_transient_transport_error' 00:27:22.800 11:54:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:22.800 11:54:55 -- host/digest.sh@71 -- # (( 721 > 0 )) 00:27:22.800 11:54:55 -- host/digest.sh@73 -- # killprocess 87507 00:27:22.800 11:54:55 -- common/autotest_common.sh@936 -- # '[' -z 87507 ']' 00:27:22.800 11:54:55 -- common/autotest_common.sh@940 -- # kill -0 87507 00:27:22.800 11:54:55 -- common/autotest_common.sh@941 -- # uname 00:27:22.800 11:54:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:22.800 11:54:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87507 00:27:22.800 killing process with pid 87507 00:27:22.800 Received shutdown signal, test time was about 2.000000 seconds 00:27:22.800 00:27:22.800 Latency(us) 00:27:22.800 [2024-11-20T11:54:55.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.800 [2024-11-20T11:54:55.843Z] =================================================================================================================== 00:27:22.800 [2024-11-20T11:54:55.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:22.800 11:54:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:22.800 11:54:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:22.800 11:54:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87507' 00:27:22.800 11:54:55 -- common/autotest_common.sh@955 -- # kill 87507 00:27:22.800 11:54:55 -- common/autotest_common.sh@960 -- # wait 87507 00:27:23.059 11:54:56 -- host/digest.sh@115 -- # killprocess 87202 00:27:23.059 11:54:56 -- common/autotest_common.sh@936 -- # '[' -z 87202 ']' 00:27:23.059 11:54:56 -- common/autotest_common.sh@940 -- # kill -0 87202 00:27:23.060 11:54:56 -- 
common/autotest_common.sh@941 -- # uname 00:27:23.060 11:54:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:23.060 11:54:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87202 00:27:23.060 killing process with pid 87202 00:27:23.060 11:54:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:23.060 11:54:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:23.060 11:54:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87202' 00:27:23.060 11:54:56 -- common/autotest_common.sh@955 -- # kill 87202 00:27:23.060 11:54:56 -- common/autotest_common.sh@960 -- # wait 87202 00:27:23.319 00:27:23.319 real 0m17.148s 00:27:23.319 user 0m31.681s 00:27:23.319 sys 0m4.629s 00:27:23.319 11:54:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:23.319 11:54:56 -- common/autotest_common.sh@10 -- # set +x 00:27:23.319 ************************************ 00:27:23.319 END TEST nvmf_digest_error 00:27:23.319 ************************************ 00:27:23.319 11:54:56 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:27:23.319 11:54:56 -- host/digest.sh@139 -- # nvmftestfini 00:27:23.320 11:54:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:23.320 11:54:56 -- nvmf/common.sh@116 -- # sync 00:27:23.579 11:54:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:23.579 11:54:56 -- nvmf/common.sh@119 -- # set +e 00:27:23.579 11:54:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:23.579 11:54:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:23.579 rmmod nvme_tcp 00:27:23.579 rmmod nvme_fabrics 00:27:23.579 rmmod nvme_keyring 00:27:23.579 11:54:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:23.579 11:54:56 -- nvmf/common.sh@123 -- # set -e 00:27:23.579 11:54:56 -- nvmf/common.sh@124 -- # return 0 00:27:23.579 11:54:56 -- nvmf/common.sh@477 -- # '[' -n 87202 ']' 00:27:23.579 11:54:56 -- nvmf/common.sh@478 -- # killprocess 87202 00:27:23.579 11:54:56 -- common/autotest_common.sh@936 -- # '[' -z 87202 ']' 00:27:23.579 11:54:56 -- common/autotest_common.sh@940 -- # kill -0 87202 00:27:23.579 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87202) - No such process 00:27:23.579 Process with pid 87202 is not found 00:27:23.579 11:54:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87202 is not found' 00:27:23.579 11:54:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:23.579 11:54:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:23.579 11:54:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:23.579 11:54:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.579 11:54:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:23.579 11:54:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.580 11:54:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.580 11:54:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.580 11:54:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:23.580 ************************************ 00:27:23.580 END TEST nvmf_digest 00:27:23.580 ************************************ 00:27:23.580 00:27:23.580 real 0m35.310s 00:27:23.580 user 1m3.901s 00:27:23.580 sys 0m9.537s 00:27:23.580 11:54:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:23.580 11:54:56 -- common/autotest_common.sh@10 -- # set +x 00:27:23.580 11:54:56 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:27:23.580 11:54:56 -- 
nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:27:23.580 11:54:56 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:23.580 11:54:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:23.580 11:54:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:23.580 11:54:56 -- common/autotest_common.sh@10 -- # set +x 00:27:23.580 ************************************ 00:27:23.580 START TEST nvmf_mdns_discovery 00:27:23.580 ************************************ 00:27:23.580 11:54:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:23.840 * Looking for test storage... 00:27:23.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:23.840 11:54:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:23.840 11:54:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:23.840 11:54:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:23.840 11:54:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:23.840 11:54:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:23.840 11:54:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:23.840 11:54:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:23.840 11:54:56 -- scripts/common.sh@335 -- # IFS=.-: 00:27:23.840 11:54:56 -- scripts/common.sh@335 -- # read -ra ver1 00:27:23.840 11:54:56 -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.840 11:54:56 -- scripts/common.sh@336 -- # read -ra ver2 00:27:23.840 11:54:56 -- scripts/common.sh@337 -- # local 'op=<' 00:27:23.840 11:54:56 -- scripts/common.sh@339 -- # ver1_l=2 00:27:23.840 11:54:56 -- scripts/common.sh@340 -- # ver2_l=1 00:27:23.840 11:54:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:23.840 11:54:56 -- scripts/common.sh@343 -- # case "$op" in 00:27:23.840 11:54:56 -- scripts/common.sh@344 -- # : 1 00:27:23.840 11:54:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:23.840 11:54:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.840 11:54:56 -- scripts/common.sh@364 -- # decimal 1 00:27:23.840 11:54:56 -- scripts/common.sh@352 -- # local d=1 00:27:23.840 11:54:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.840 11:54:56 -- scripts/common.sh@354 -- # echo 1 00:27:23.840 11:54:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:23.840 11:54:56 -- scripts/common.sh@365 -- # decimal 2 00:27:23.840 11:54:56 -- scripts/common.sh@352 -- # local d=2 00:27:23.840 11:54:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.840 11:54:56 -- scripts/common.sh@354 -- # echo 2 00:27:23.840 11:54:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:23.840 11:54:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:23.840 11:54:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:23.840 11:54:56 -- scripts/common.sh@367 -- # return 0 00:27:23.840 11:54:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.840 11:54:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:23.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.840 --rc genhtml_branch_coverage=1 00:27:23.840 --rc genhtml_function_coverage=1 00:27:23.840 --rc genhtml_legend=1 00:27:23.840 --rc geninfo_all_blocks=1 00:27:23.840 --rc geninfo_unexecuted_blocks=1 00:27:23.840 00:27:23.840 ' 00:27:23.840 11:54:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:23.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.840 --rc genhtml_branch_coverage=1 00:27:23.840 --rc genhtml_function_coverage=1 00:27:23.840 --rc genhtml_legend=1 00:27:23.840 --rc geninfo_all_blocks=1 00:27:23.840 --rc geninfo_unexecuted_blocks=1 00:27:23.840 00:27:23.840 ' 00:27:23.840 11:54:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:23.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.840 --rc genhtml_branch_coverage=1 00:27:23.840 --rc genhtml_function_coverage=1 00:27:23.840 --rc genhtml_legend=1 00:27:23.840 --rc geninfo_all_blocks=1 00:27:23.840 --rc geninfo_unexecuted_blocks=1 00:27:23.840 00:27:23.840 ' 00:27:23.840 11:54:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:23.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.840 --rc genhtml_branch_coverage=1 00:27:23.840 --rc genhtml_function_coverage=1 00:27:23.840 --rc genhtml_legend=1 00:27:23.840 --rc geninfo_all_blocks=1 00:27:23.840 --rc geninfo_unexecuted_blocks=1 00:27:23.840 00:27:23.840 ' 00:27:23.840 11:54:56 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:23.840 11:54:56 -- nvmf/common.sh@7 -- # uname -s 00:27:23.840 11:54:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.840 11:54:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.840 11:54:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.840 11:54:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.840 11:54:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.840 11:54:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.840 11:54:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.840 11:54:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.840 11:54:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.840 11:54:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.840 11:54:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 
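(Note: the nvmf/common.sh lines above generate a per-run host identity with `nvme gen-hostnqn` and keep the UUID part as NVME_HOSTID; NVME_HOST and NVME_CONNECT are later combined when connecting an initiator. A minimal sketch of how those pieces fit together — the subsystem NQN and address are the ones this run creates later (cnode0 on 10.0.0.2:4420), and the one-liner itself is illustrative rather than a command taken from the script:

    HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}              # common.sh keeps just the UUID as NVME_HOSTID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode0 \
         --hostnqn="$HOSTNQN" --hostid="$HOSTID"
)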
00:27:23.840 11:54:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:27:23.840 11:54:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.840 11:54:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.840 11:54:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:23.840 11:54:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:23.840 11:54:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.840 11:54:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.840 11:54:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.840 11:54:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.840 11:54:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.840 11:54:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.840 11:54:56 -- paths/export.sh@5 -- # export PATH 00:27:23.840 11:54:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.840 11:54:56 -- nvmf/common.sh@46 -- # : 0 00:27:23.840 11:54:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:23.840 11:54:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:23.840 11:54:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:23.841 11:54:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.841 11:54:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.841 11:54:56 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:27:23.841 11:54:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:23.841 11:54:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:27:23.841 11:54:56 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:27:23.841 11:54:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:23.841 11:54:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.841 11:54:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:23.841 11:54:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:23.841 11:54:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:23.841 11:54:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.841 11:54:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.841 11:54:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.841 11:54:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:23.841 11:54:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:23.841 11:54:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:23.841 11:54:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:23.841 11:54:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:23.841 11:54:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:23.841 11:54:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.841 11:54:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.841 11:54:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:23.841 11:54:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:24.101 11:54:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:24.101 11:54:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:24.101 11:54:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:24.101 11:54:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.101 11:54:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:24.101 11:54:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:24.101 11:54:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:24.101 11:54:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:24.101 11:54:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:24.101 11:54:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:24.101 Cannot find device "nvmf_tgt_br" 00:27:24.101 11:54:56 -- nvmf/common.sh@154 -- # true 00:27:24.101 11:54:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:24.101 Cannot find device "nvmf_tgt_br2" 00:27:24.101 11:54:56 -- nvmf/common.sh@155 -- # true 00:27:24.101 11:54:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:24.101 11:54:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:24.101 Cannot find device "nvmf_tgt_br" 00:27:24.101 11:54:56 -- nvmf/common.sh@157 -- # true 00:27:24.101 
11:54:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:24.101 Cannot find device "nvmf_tgt_br2" 00:27:24.101 11:54:56 -- nvmf/common.sh@158 -- # true 00:27:24.101 11:54:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:24.101 11:54:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:24.101 11:54:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:24.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:24.101 11:54:57 -- nvmf/common.sh@161 -- # true 00:27:24.101 11:54:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:24.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:24.101 11:54:57 -- nvmf/common.sh@162 -- # true 00:27:24.101 11:54:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:24.101 11:54:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:24.101 11:54:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:24.101 11:54:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:24.101 11:54:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:24.101 11:54:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:24.101 11:54:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:24.101 11:54:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:24.101 11:54:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:24.101 11:54:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:24.101 11:54:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:24.101 11:54:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:24.101 11:54:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:24.101 11:54:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:24.101 11:54:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:24.101 11:54:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:24.101 11:54:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:24.101 11:54:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:24.101 11:54:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:24.101 11:54:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:24.360 11:54:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:24.360 11:54:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:24.360 11:54:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:24.360 11:54:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:24.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:27:24.360 00:27:24.360 --- 10.0.0.2 ping statistics --- 00:27:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.360 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:27:24.360 11:54:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:24.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:24.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:27:24.360 00:27:24.360 --- 10.0.0.3 ping statistics --- 00:27:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.360 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:27:24.360 11:54:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:24.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:24.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:27:24.360 00:27:24.360 --- 10.0.0.1 ping statistics --- 00:27:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.360 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:27:24.360 11:54:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.360 11:54:57 -- nvmf/common.sh@421 -- # return 0 00:27:24.360 11:54:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:24.360 11:54:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.360 11:54:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:24.360 11:54:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:24.360 11:54:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.360 11:54:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:24.360 11:54:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:24.360 11:54:57 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:24.360 11:54:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:24.360 11:54:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:24.360 11:54:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.360 11:54:57 -- nvmf/common.sh@469 -- # nvmfpid=87806 00:27:24.360 11:54:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:24.360 11:54:57 -- nvmf/common.sh@470 -- # waitforlisten 87806 00:27:24.360 11:54:57 -- common/autotest_common.sh@829 -- # '[' -z 87806 ']' 00:27:24.360 11:54:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.360 11:54:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.360 11:54:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.360 11:54:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.360 11:54:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.360 [2024-11-20 11:54:57.270514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:24.360 [2024-11-20 11:54:57.270584] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.620 [2024-11-20 11:54:57.406926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.620 [2024-11-20 11:54:57.486680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:24.620 [2024-11-20 11:54:57.486793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.620 [2024-11-20 11:54:57.486799] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:24.620 [2024-11-20 11:54:57.486805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.620 [2024-11-20 11:54:57.486826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.217 11:54:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:25.217 11:54:58 -- common/autotest_common.sh@862 -- # return 0 00:27:25.217 11:54:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:25.217 11:54:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:25.217 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.217 11:54:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.217 11:54:58 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:27:25.217 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.217 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.217 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.217 11:54:58 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:27:25.217 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.217 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.217 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.217 11:54:58 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.217 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.217 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.476 [2024-11-20 11:54:58.261868] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.476 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.476 11:54:58 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:25.476 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.477 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.477 [2024-11-20 11:54:58.273967] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:25.477 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:25.477 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.477 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.477 null0 00:27:25.477 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:25.477 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.477 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.477 null1 00:27:25.477 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:27:25.477 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.477 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.477 null2 00:27:25.477 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:27:25.477 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.477 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.477 null3 00:27:25.477 11:54:58 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:27:25.477 11:54:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.477 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.477 11:54:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@47 -- # hostpid=87856 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:25.477 11:54:58 -- host/mdns_discovery.sh@48 -- # waitforlisten 87856 /tmp/host.sock 00:27:25.477 11:54:58 -- common/autotest_common.sh@829 -- # '[' -z 87856 ']' 00:27:25.477 11:54:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:25.477 11:54:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.477 11:54:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:25.477 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:25.477 11:54:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.477 11:54:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.477 [2024-11-20 11:54:58.393375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:25.477 [2024-11-20 11:54:58.393508] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87856 ] 00:27:25.736 [2024-11-20 11:54:58.529491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.736 [2024-11-20 11:54:58.608761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:25.736 [2024-11-20 11:54:58.608962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.303 11:54:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.303 11:54:59 -- common/autotest_common.sh@862 -- # return 0 00:27:26.303 11:54:59 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:27:26.303 11:54:59 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:27:26.303 11:54:59 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:27:26.303 11:54:59 -- host/mdns_discovery.sh@57 -- # avahipid=87886 00:27:26.303 11:54:59 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:27:26.303 11:54:59 -- host/mdns_discovery.sh@58 -- # sleep 1 00:27:26.303 11:54:59 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:27:26.563 Process 1065 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:27:26.563 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:27:26.563 Successfully dropped root privileges. 00:27:26.563 avahi-daemon 0.8 starting up. 00:27:26.563 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:27:26.563 Successfully called chroot(). 00:27:26.563 Successfully dropped remaining capabilities. 00:27:26.563 No service file found in /etc/avahi/services. 00:27:26.563 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
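(Note: the configuration piped to avahi-daemon on /dev/fd/63 by the echo -e in mdns_discovery.sh@56 above expands to the following; this is reconstructed from the echo string, not read from a file on disk:

    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no

It restricts mDNS to the two target-namespace interfaces and disables IPv6, which matches the "Joining mDNS multicast group" lines that follow.)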
00:27:26.563 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:27:26.563 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:26.563 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:27:26.563 Network interface enumeration completed. 00:27:26.563 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:27:26.563 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:27:26.563 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:27:26.563 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:27:27.130 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 915118574. 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:27.390 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.390 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.390 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:27.390 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.390 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.390 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:27.390 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@68 -- # sort 00:27:27.390 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@68 -- # xargs 00:27:27.390 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@64 -- # sort 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.390 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.390 11:55:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:27.390 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # xargs 00:27:27.648 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:27.648 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.648 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.648 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # xargs 00:27:27.648 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 
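(Note: the xtrace lines above and below boil down to this host-side sequence — a condensed sketch where the socket path, service name, and query NQN are taken verbatim from the trace, and scripts/rpc.py stands in for the test's rpc_cmd wrapper:

    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # poll what the discovery service has found/attached so far
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name'

The empty-string comparisons in the trace ([[ '' == '' ]]) are the test asserting that no controllers or bdevs exist yet, before the CDC service is published via avahi-publish.)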
00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # sort 00:27:27.648 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.648 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.648 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.648 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # sort 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # xargs 00:27:27.648 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:27.648 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.648 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.648 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:27.648 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.648 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # sort 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@68 -- # xargs 00:27:27.648 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:27:27.648 [2024-11-20 11:55:00.665548] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # sort 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:27.648 11:55:00 -- host/mdns_discovery.sh@64 -- # xargs 00:27:27.648 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.648 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.648 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:27.907 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.907 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 [2024-11-20 11:55:00.717792] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.907 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:27.907 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.907 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 
11:55:00 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:27:27.907 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.907 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:27:27.907 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.907 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:27:27.907 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.907 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:27.907 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.907 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 [2024-11-20 11:55:00.777641] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:27.907 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:27.907 11:55:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.907 11:55:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 [2024-11-20 11:55:00.789613] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:27.907 11:55:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=87937 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:27:27.907 11:55:00 -- host/mdns_discovery.sh@125 -- # sleep 5 00:27:28.844 [2024-11-20 11:55:01.563839] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:28.844 Established under name 'CDC' 00:27:29.102 [2024-11-20 11:55:01.963063] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:29.102 [2024-11-20 11:55:01.963080] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:27:29.102 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:29.102 cookie is 0 00:27:29.102 is_local: 1 00:27:29.102 our_own: 0 00:27:29.102 wide_area: 0 00:27:29.102 multicast: 1 00:27:29.102 cached: 1 00:27:29.102 [2024-11-20 11:55:02.062865] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:29.102 [2024-11-20 11:55:02.062877] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:27:29.102 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:29.102 cookie is 0 00:27:29.102 is_local: 1 00:27:29.102 our_own: 0 00:27:29.102 wide_area: 0 00:27:29.102 multicast: 1 00:27:29.102 
cached: 1 00:27:30.036 [2024-11-20 11:55:02.966245] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:30.036 [2024-11-20 11:55:02.966265] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:30.036 [2024-11-20 11:55:02.966276] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:30.036 [2024-11-20 11:55:03.052153] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:27:30.036 [2024-11-20 11:55:03.065818] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:30.036 [2024-11-20 11:55:03.065832] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:30.036 [2024-11-20 11:55:03.065848] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.295 [2024-11-20 11:55:03.109823] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:30.295 [2024-11-20 11:55:03.109842] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:30.295 [2024-11-20 11:55:03.152826] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:27:30.295 [2024-11-20 11:55:03.207213] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:30.295 [2024-11-20 11:55:03.207233] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:32.832 11:55:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.832 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@80 -- # sort 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@80 -- # xargs 00:27:32.832 11:55:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@76 -- # sort 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:32.832 11:55:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.832 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:27:32.832 11:55:05 -- host/mdns_discovery.sh@76 -- # xargs 00:27:33.089 11:55:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:33.089 11:55:05 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:33.089 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@68 -- # sort 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@68 -- # xargs 00:27:33.089 11:55:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@64 -- # xargs 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.089 11:55:05 -- host/mdns_discovery.sh@64 -- # sort 00:27:33.089 11:55:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.089 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:27:33.089 11:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # xargs 00:27:33.089 11:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:33.089 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.089 11:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@72 -- # xargs 00:27:33.089 11:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.089 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.089 11:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:33.089 11:55:06 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:33.089 11:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.089 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.089 11:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.347 11:55:06 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:27:33.347 11:55:06 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:27:33.347 11:55:06 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:33.347 11:55:06 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:33.347 11:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.347 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.347 11:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.347 11:55:06 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:33.347 11:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.347 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.347 11:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.347 11:55:06 -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:34.282 11:55:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.282 11:55:07 -- common/autotest_common.sh@10 -- # set +x 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@64 -- # sort 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@64 -- # xargs 00:27:34.282 11:55:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:34.282 11:55:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.282 11:55:07 -- common/autotest_common.sh@10 -- # set +x 00:27:34.282 11:55:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:34.282 11:55:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.282 11:55:07 -- common/autotest_common.sh@10 -- # set +x 00:27:34.282 [2024-11-20 11:55:07.304577] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:34.282 [2024-11-20 11:55:07.305004] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:34.282 [2024-11-20 11:55:07.305078] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.282 [2024-11-20 11:55:07.305109] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:34.282 [2024-11-20 11:55:07.305119] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:34.282 11:55:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:34.282 11:55:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.282 11:55:07 -- common/autotest_common.sh@10 -- # set +x 00:27:34.282 [2024-11-20 11:55:07.316482] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:34.282 [2024-11-20 11:55:07.316973] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:34.282 [2024-11-20 11:55:07.317006] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:34.282 11:55:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.282 11:55:07 -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:34.541 [2024-11-20 11:55:07.447863] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:34.541 [2024-11-20 11:55:07.447991] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:34.541 [2024-11-20 11:55:07.510932] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:34.541 [2024-11-20 11:55:07.510948] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:34.541 [2024-11-20 11:55:07.510952] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:34.541 [2024-11-20 11:55:07.510963] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.541 [2024-11-20 11:55:07.511027] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:34.541 [2024-11-20 11:55:07.511031] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:34.541 [2024-11-20 11:55:07.511034] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:34.541 [2024-11-20 11:55:07.511041] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:34.541 [2024-11-20 11:55:07.556858] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:34.541 [2024-11-20 11:55:07.556870] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:34.541 [2024-11-20 11:55:07.556892] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:34.541 [2024-11-20 11:55:07.556896] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:35.480 11:55:08 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:35.480 11:55:08 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:35.480 11:55:08 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:35.480 11:55:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.480 11:55:08 -- host/mdns_discovery.sh@68 -- # sort 00:27:35.480 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.480 11:55:08 -- host/mdns_discovery.sh@68 -- # xargs 00:27:35.480 11:55:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@64 -- # xargs 00:27:35.481 11:55:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.481 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@64 -- # sort 00:27:35.481 11:55:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # xargs 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:35.481 11:55:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.481 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.481 11:55:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # 
jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:35.481 11:55:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.481 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:35.481 11:55:08 -- host/mdns_discovery.sh@72 -- # xargs 00:27:35.481 11:55:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:35.744 11:55:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:27:35.744 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.744 11:55:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.744 11:55:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.744 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.744 [2024-11-20 11:55:08.588142] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:35.744 [2024-11-20 11:55:08.588170] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:35.744 [2024-11-20 11:55:08.588191] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:35.744 [2024-11-20 11:55:08.588200] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:35.744 11:55:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:35.744 11:55:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.744 11:55:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.744 [2024-11-20 11:55:08.595120] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:35.744 [2024-11-20 11:55:08.595153] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:35.744 [2024-11-20 11:55:08.596181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.596214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.596222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.596228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.596234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.596240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.596246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.596251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.596256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.744 [2024-11-20 11:55:08.598211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.598233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.598239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.598260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.598266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.598271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.598277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.744 [2024-11-20 11:55:08.598283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.744 [2024-11-20 11:55:08.598287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.744 11:55:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.744 11:55:08 -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:35.744 [2024-11-20 11:55:08.606134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.744 [2024-11-20 11:55:08.608171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.744 [2024-11-20 11:55:08.616128] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.744 [2024-11-20 11:55:08.616226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.616251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.616259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.744 [2024-11-20 11:55:08.616265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.744 [2024-11-20 11:55:08.616275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.744 [2024-11-20 11:55:08.616284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.744 [2024-11-20 11:55:08.616289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.744 [2024-11-20 11:55:08.616296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.744 [2024-11-20 11:55:08.616305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.744 [2024-11-20 11:55:08.618160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.744 [2024-11-20 11:55:08.618227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.618249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.618256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.744 [2024-11-20 11:55:08.618262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.744 [2024-11-20 11:55:08.618271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.744 [2024-11-20 11:55:08.618279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.744 [2024-11-20 11:55:08.618284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.744 [2024-11-20 11:55:08.618289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.744 [2024-11-20 11:55:08.618297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.744 [2024-11-20 11:55:08.626150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.744 [2024-11-20 11:55:08.626215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.626236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.626243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.744 [2024-11-20 11:55:08.626249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.744 [2024-11-20 11:55:08.626257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.744 [2024-11-20 11:55:08.626265] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.744 [2024-11-20 11:55:08.626269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.744 [2024-11-20 11:55:08.626274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.744 [2024-11-20 11:55:08.626282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
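The connect() failures with errno = 111 (ECONNREFUSED) in this stretch of the trace are expected at this point in the test: the 4420 listeners were just removed from nqn.2016-06.io.spdk:cnode0 and nqn.2016-06.io.spdk:cnode20, so bdev_nvme keeps retrying the stale 10.0.0.2:4420 and 10.0.0.3:4420 paths until the next discovery log page prunes them and only the 4421 paths remain. A minimal way to watch that settle from the host side, using only RPCs that already appear in this trace (the polling loop itself is illustrative and is not part of mdns_discovery.sh):

    # Illustrative only, assuming the host RPC socket used throughout this trace (/tmp/host.sock).
    # bdev_nvme_get_controllers -n <name> lists each controller's paths; once only 4421 is
    # reported, the stale 4420 path has been dropped by the discovery poller.
    for name in mdns0_nvme0 mdns1_nvme0; do
        until [[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
                    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" == "4421" ]]; do
            sleep 1
        done
    done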
00:27:35.744 [2024-11-20 11:55:08.628175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.744 [2024-11-20 11:55:08.628224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.628247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.628254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.744 [2024-11-20 11:55:08.628260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.744 [2024-11-20 11:55:08.628268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.744 [2024-11-20 11:55:08.628275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.744 [2024-11-20 11:55:08.628280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.744 [2024-11-20 11:55:08.628285] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.744 [2024-11-20 11:55:08.628293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.744 [2024-11-20 11:55:08.636165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.744 [2024-11-20 11:55:08.636231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.636253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.744 [2024-11-20 11:55:08.636260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.744 [2024-11-20 11:55:08.636265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.744 [2024-11-20 11:55:08.636274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.744 [2024-11-20 11:55:08.636281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.744 [2024-11-20 11:55:08.636285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.636290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.745 [2024-11-20 11:55:08.636298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.745 [2024-11-20 11:55:08.638191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.745 [2024-11-20 11:55:08.638250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.638271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.638278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.745 [2024-11-20 11:55:08.638283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.638291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.638299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.638303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.638308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.745 [2024-11-20 11:55:08.638316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.745 [2024-11-20 11:55:08.646182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.745 [2024-11-20 11:55:08.646253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.646275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.646282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.745 [2024-11-20 11:55:08.646287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.646296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.646303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.646308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.646312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.745 [2024-11-20 11:55:08.646320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.745 [2024-11-20 11:55:08.648203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.745 [2024-11-20 11:55:08.648252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.648273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.648281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.745 [2024-11-20 11:55:08.648287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.648295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.648302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.648306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.648311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.745 [2024-11-20 11:55:08.648319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.745 [2024-11-20 11:55:08.656200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.745 [2024-11-20 11:55:08.656249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.656286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.656293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.745 [2024-11-20 11:55:08.656299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.656307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.656314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.656319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.656324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.745 [2024-11-20 11:55:08.656332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.745 [2024-11-20 11:55:08.658220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.745 [2024-11-20 11:55:08.658278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.658298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.658306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.745 [2024-11-20 11:55:08.658311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.658319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.658327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.658331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.658336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.745 [2024-11-20 11:55:08.658344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.745 [2024-11-20 11:55:08.666215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.745 [2024-11-20 11:55:08.666262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.666299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.666306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.745 [2024-11-20 11:55:08.666311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.666320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.666327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.666332] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.666337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.745 [2024-11-20 11:55:08.666344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.745 [2024-11-20 11:55:08.668231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.745 [2024-11-20 11:55:08.668277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.668299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.668306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.745 [2024-11-20 11:55:08.668312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.668320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.668327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.668332] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.668337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.745 [2024-11-20 11:55:08.668345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.745 [2024-11-20 11:55:08.676229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.745 [2024-11-20 11:55:08.676291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.676312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.676319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.745 [2024-11-20 11:55:08.676325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.676333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.676340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.676344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.676349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.745 [2024-11-20 11:55:08.676357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.745 [2024-11-20 11:55:08.678246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.745 [2024-11-20 11:55:08.678305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.678326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.745 [2024-11-20 11:55:08.678334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.745 [2024-11-20 11:55:08.678339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.745 [2024-11-20 11:55:08.678347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.745 [2024-11-20 11:55:08.678354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.745 [2024-11-20 11:55:08.678359] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.745 [2024-11-20 11:55:08.678363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.745 [2024-11-20 11:55:08.678371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.745 [2024-11-20 11:55:08.686243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.745 [2024-11-20 11:55:08.686316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.686339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.686346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.746 [2024-11-20 11:55:08.686351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.686360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.686367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.686371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.686376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.746 [2024-11-20 11:55:08.686384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.746 [2024-11-20 11:55:08.688258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.746 [2024-11-20 11:55:08.688309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.688330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.688337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.746 [2024-11-20 11:55:08.688343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.688351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.688358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.688362] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.688367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.746 [2024-11-20 11:55:08.688375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.746 [2024-11-20 11:55:08.696263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.746 [2024-11-20 11:55:08.696311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.696348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.696355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.746 [2024-11-20 11:55:08.696361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.696369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.696376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.696380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.696385] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.746 [2024-11-20 11:55:08.696393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.746 [2024-11-20 11:55:08.698276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.746 [2024-11-20 11:55:08.698334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.698355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.698363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.746 [2024-11-20 11:55:08.698368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.698376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.698384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.698388] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.698393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.746 [2024-11-20 11:55:08.698401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.746 [2024-11-20 11:55:08.706278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.746 [2024-11-20 11:55:08.706340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.706361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.706368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.746 [2024-11-20 11:55:08.706373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.706381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.706388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.706393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.706397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.746 [2024-11-20 11:55:08.706405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.746 [2024-11-20 11:55:08.708286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.746 [2024-11-20 11:55:08.708335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.708357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.708364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.746 [2024-11-20 11:55:08.708370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.708378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.708385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.708390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.708394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.746 [2024-11-20 11:55:08.708402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.746 [2024-11-20 11:55:08.716291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.746 [2024-11-20 11:55:08.716339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.716376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.716383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2169b70 with addr=10.0.0.2, port=4420 00:27:35.746 [2024-11-20 11:55:08.716388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169b70 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.716397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169b70 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.716404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.716408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.716413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.746 [2024-11-20 11:55:08.716421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.746 [2024-11-20 11:55:08.718302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:35.746 [2024-11-20 11:55:08.718360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.718380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.746 [2024-11-20 11:55:08.718388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21054e0 with addr=10.0.0.3, port=4420 00:27:35.746 [2024-11-20 11:55:08.718393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21054e0 is same with the state(5) to be set 00:27:35.746 [2024-11-20 11:55:08.718401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21054e0 (9): Bad file descriptor 00:27:35.746 [2024-11-20 11:55:08.718408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:35.746 [2024-11-20 11:55:08.718413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:35.746 [2024-11-20 11:55:08.718418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:35.746 [2024-11-20 11:55:08.718425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.746 [2024-11-20 11:55:08.725957] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:35.746 [2024-11-20 11:55:08.725978] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:35.746 [2024-11-20 11:55:08.726006] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:35.746 [2024-11-20 11:55:08.726025] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:35.746 [2024-11-20 11:55:08.726035] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:35.746 [2024-11-20 11:55:08.726042] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:36.006 [2024-11-20 11:55:08.811856] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:36.006 [2024-11-20 11:55:08.811895] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:36.575 11:55:09 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:36.575 11:55:09 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:36.575 11:55:09 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:36.575 11:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.575 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.575 11:55:09 -- host/mdns_discovery.sh@68 -- # sort 00:27:36.575 11:55:09 -- host/mdns_discovery.sh@68 -- # xargs 00:27:36.835 11:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 
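The checks that follow (mdns_discovery.sh@164 through @169) verify the end state after the 4420 listeners were removed: both mDNS-discovered controllers are still attached, the four bdevs are still present, each controller now exposes only port 4421, and no additional notifications were emitted. Pieced together from the xtrace output, the path helper behaves roughly like the sketch below (a reconstruction, not the literal function body from mdns_discovery.sh):

    # Sketch of the get_subsystem_paths helper as it appears from the trace: it prints the
    # sorted transport service IDs (ports) of every path attached to one controller.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # Expected here: [[ $(get_subsystem_paths mdns0_nvme0) == "4421" ]], and likewise for mdns1_nvme0.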
00:27:36.835 11:55:09 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@64 -- # sort 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@64 -- # xargs 00:27:36.835 11:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.835 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.835 11:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:36.835 11:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.835 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # xargs 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:36.835 11:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@72 -- # xargs 00:27:36.835 11:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.835 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.835 11:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:36.835 11:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:36.835 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.835 11:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:36.835 11:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.835 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.835 11:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.835 11:55:09 -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:37.095 [2024-11-20 11:55:09.947714] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:38.040 11:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.040 11:55:10 -- common/autotest_common.sh@10 -- # set +x 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@80 -- # sort 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@80 -- # xargs 00:27:38.040 11:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@68 -- # sort 00:27:38.040 11:55:10 -- host/mdns_discovery.sh@68 -- # xargs 00:27:38.040 11:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.040 11:55:10 -- common/autotest_common.sh@10 -- # set +x 00:27:38.040 11:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.041 11:55:10 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:38.041 11:55:10 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:38.041 11:55:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.041 11:55:10 -- host/mdns_discovery.sh@64 -- # sort 00:27:38.041 11:55:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:38.041 11:55:10 -- host/mdns_discovery.sh@64 -- # xargs 00:27:38.041 11:55:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.041 11:55:10 -- common/autotest_common.sh@10 -- # set +x 00:27:38.041 11:55:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.041 11:55:11 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:38.041 11:55:11 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:38.041 11:55:11 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:38.041 11:55:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.041 11:55:11 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:38.041 11:55:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.041 11:55:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.041 11:55:11 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:27:38.041 11:55:11 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:27:38.041 11:55:11 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:38.309 11:55:11 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:38.309 11:55:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.309 11:55:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.309 11:55:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.309 11:55:11 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:38.309 11:55:11 -- common/autotest_common.sh@650 -- # local es=0 00:27:38.309 11:55:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:38.309 11:55:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:38.309 11:55:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.309 11:55:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:38.309 11:55:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.309 11:55:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:38.309 11:55:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.309 11:55:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.309 [2024-11-20 11:55:11.100624] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:38.309 2024/11/20 11:55:11 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:38.309 request: 00:27:38.309 { 00:27:38.309 "method": "bdev_nvme_start_mdns_discovery", 00:27:38.309 "params": { 00:27:38.309 "name": "mdns", 00:27:38.309 "svcname": "_nvme-disc._http", 00:27:38.309 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:38.309 } 00:27:38.309 } 00:27:38.309 Got JSON-RPC error response 00:27:38.309 GoRPCClient: error on JSON-RPC call 00:27:38.309 11:55:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:38.309 11:55:11 -- common/autotest_common.sh@653 -- # es=1 00:27:38.309 11:55:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:38.309 11:55:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:38.309 11:55:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:38.309 11:55:11 -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:38.570 [2024-11-20 11:55:11.484340] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:38.570 [2024-11-20 11:55:11.584151] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:38.830 [2024-11-20 11:55:11.683962] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:38.830 [2024-11-20 11:55:11.684008] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 
fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:27:38.830 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:38.830 cookie is 0 00:27:38.830 is_local: 1 00:27:38.830 our_own: 0 00:27:38.830 wide_area: 0 00:27:38.830 multicast: 1 00:27:38.830 cached: 1 00:27:38.830 [2024-11-20 11:55:11.783768] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:38.830 [2024-11-20 11:55:11.783837] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:27:38.830 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:38.830 cookie is 0 00:27:38.830 is_local: 1 00:27:38.830 our_own: 0 00:27:38.830 wide_area: 0 00:27:38.830 multicast: 1 00:27:38.830 cached: 1 00:27:39.767 [2024-11-20 11:55:12.688886] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:39.767 [2024-11-20 11:55:12.688943] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:39.767 [2024-11-20 11:55:12.688973] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:39.767 [2024-11-20 11:55:12.774803] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:39.767 [2024-11-20 11:55:12.788546] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:39.767 [2024-11-20 11:55:12.788604] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:39.767 [2024-11-20 11:55:12.788645] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:40.026 [2024-11-20 11:55:12.838202] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:40.026 [2024-11-20 11:55:12.838289] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:40.026 [2024-11-20 11:55:12.874343] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:40.026 [2024-11-20 11:55:12.932790] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:40.026 [2024-11-20 11:55:12.932859] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:43.313 11:55:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@80 -- # sort 00:27:43.313 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@80 -- # xargs 00:27:43.313 11:55:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:43.313 
11:55:16 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:43.313 11:55:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@76 -- # sort 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@76 -- # xargs 00:27:43.313 11:55:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@64 -- # xargs 00:27:43.313 11:55:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@64 -- # sort 00:27:43.313 11:55:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:43.313 11:55:16 -- common/autotest_common.sh@650 -- # local es=0 00:27:43.313 11:55:16 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:43.313 11:55:16 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:43.313 11:55:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.313 11:55:16 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:43.313 11:55:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.313 11:55:16 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:43.313 11:55:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 [2024-11-20 11:55:16.290432] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:43.313 2024/11/20 11:55:16 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:43.313 request: 00:27:43.313 { 00:27:43.313 "method": "bdev_nvme_start_mdns_discovery", 00:27:43.313 "params": { 00:27:43.313 "name": "cdc", 00:27:43.313 "svcname": "_nvme-disc._tcp", 00:27:43.313 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:43.313 } 00:27:43.313 } 00:27:43.313 Got JSON-RPC error response 00:27:43.313 GoRPCClient: error on JSON-RPC call 00:27:43.313 11:55:16 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:43.313 11:55:16 -- common/autotest_common.sh@653 -- # es=1 00:27:43.313 11:55:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:43.313 11:55:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:43.313 11:55:16 -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:43.313 11:55:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@76 -- # xargs 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@76 -- # sort 00:27:43.313 11:55:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:43.313 11:55:16 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@64 -- # sort 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@64 -- # xargs 00:27:43.573 11:55:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.573 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.573 11:55:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:43.573 11:55:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.573 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.573 11:55:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@197 -- # kill 87856 00:27:43.573 11:55:16 -- host/mdns_discovery.sh@200 -- # wait 87856 00:27:43.573 [2024-11-20 11:55:16.504199] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:43.834 11:55:16 -- host/mdns_discovery.sh@201 -- # kill 87937 00:27:43.834 Got SIGTERM, quitting. 00:27:43.834 11:55:16 -- host/mdns_discovery.sh@202 -- # kill 87886 00:27:43.834 11:55:16 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:27:43.834 11:55:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:43.834 Got SIGTERM, quitting. 00:27:43.834 11:55:16 -- nvmf/common.sh@116 -- # sync 00:27:43.834 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:43.834 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:43.834 avahi-daemon 0.8 exiting. 
00:27:43.834 11:55:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:43.834 11:55:16 -- nvmf/common.sh@119 -- # set +e 00:27:43.834 11:55:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:43.834 11:55:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:43.834 rmmod nvme_tcp 00:27:43.834 rmmod nvme_fabrics 00:27:43.834 rmmod nvme_keyring 00:27:43.834 11:55:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:43.834 11:55:16 -- nvmf/common.sh@123 -- # set -e 00:27:43.834 11:55:16 -- nvmf/common.sh@124 -- # return 0 00:27:43.834 11:55:16 -- nvmf/common.sh@477 -- # '[' -n 87806 ']' 00:27:43.834 11:55:16 -- nvmf/common.sh@478 -- # killprocess 87806 00:27:43.834 11:55:16 -- common/autotest_common.sh@936 -- # '[' -z 87806 ']' 00:27:43.834 11:55:16 -- common/autotest_common.sh@940 -- # kill -0 87806 00:27:43.834 11:55:16 -- common/autotest_common.sh@941 -- # uname 00:27:43.834 11:55:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:43.834 11:55:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87806 00:27:43.834 11:55:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:43.834 11:55:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:43.834 killing process with pid 87806 00:27:43.834 11:55:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87806' 00:27:43.834 11:55:16 -- common/autotest_common.sh@955 -- # kill 87806 00:27:43.834 11:55:16 -- common/autotest_common.sh@960 -- # wait 87806 00:27:44.094 11:55:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:44.094 11:55:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:44.094 11:55:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:44.094 11:55:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.094 11:55:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:44.094 11:55:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.094 11:55:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.094 11:55:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.094 11:55:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:44.094 00:27:44.094 real 0m20.405s 00:27:44.094 user 0m39.589s 00:27:44.094 sys 0m2.174s 00:27:44.094 11:55:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:44.094 11:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:44.094 ************************************ 00:27:44.094 END TEST nvmf_mdns_discovery 00:27:44.094 ************************************ 00:27:44.094 11:55:17 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:27:44.094 11:55:17 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:44.094 11:55:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:44.094 11:55:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:44.094 11:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:44.094 ************************************ 00:27:44.094 START TEST nvmf_multipath 00:27:44.094 ************************************ 00:27:44.094 11:55:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:44.355 * Looking for test storage... 
00:27:44.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:44.355 11:55:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:44.355 11:55:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:44.355 11:55:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:44.355 11:55:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:44.355 11:55:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:44.355 11:55:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:44.355 11:55:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:44.355 11:55:17 -- scripts/common.sh@335 -- # IFS=.-: 00:27:44.355 11:55:17 -- scripts/common.sh@335 -- # read -ra ver1 00:27:44.355 11:55:17 -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.355 11:55:17 -- scripts/common.sh@336 -- # read -ra ver2 00:27:44.355 11:55:17 -- scripts/common.sh@337 -- # local 'op=<' 00:27:44.355 11:55:17 -- scripts/common.sh@339 -- # ver1_l=2 00:27:44.355 11:55:17 -- scripts/common.sh@340 -- # ver2_l=1 00:27:44.355 11:55:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:44.355 11:55:17 -- scripts/common.sh@343 -- # case "$op" in 00:27:44.355 11:55:17 -- scripts/common.sh@344 -- # : 1 00:27:44.355 11:55:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:44.355 11:55:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:44.355 11:55:17 -- scripts/common.sh@364 -- # decimal 1 00:27:44.355 11:55:17 -- scripts/common.sh@352 -- # local d=1 00:27:44.355 11:55:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.355 11:55:17 -- scripts/common.sh@354 -- # echo 1 00:27:44.355 11:55:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:44.355 11:55:17 -- scripts/common.sh@365 -- # decimal 2 00:27:44.355 11:55:17 -- scripts/common.sh@352 -- # local d=2 00:27:44.355 11:55:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.355 11:55:17 -- scripts/common.sh@354 -- # echo 2 00:27:44.355 11:55:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:44.355 11:55:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:44.356 11:55:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:44.356 11:55:17 -- scripts/common.sh@367 -- # return 0 00:27:44.356 11:55:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.356 11:55:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.356 --rc genhtml_branch_coverage=1 00:27:44.356 --rc genhtml_function_coverage=1 00:27:44.356 --rc genhtml_legend=1 00:27:44.356 --rc geninfo_all_blocks=1 00:27:44.356 --rc geninfo_unexecuted_blocks=1 00:27:44.356 00:27:44.356 ' 00:27:44.356 11:55:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.356 --rc genhtml_branch_coverage=1 00:27:44.356 --rc genhtml_function_coverage=1 00:27:44.356 --rc genhtml_legend=1 00:27:44.356 --rc geninfo_all_blocks=1 00:27:44.356 --rc geninfo_unexecuted_blocks=1 00:27:44.356 00:27:44.356 ' 00:27:44.356 11:55:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.356 --rc genhtml_branch_coverage=1 00:27:44.356 --rc genhtml_function_coverage=1 00:27:44.356 --rc genhtml_legend=1 00:27:44.356 --rc geninfo_all_blocks=1 00:27:44.356 --rc geninfo_unexecuted_blocks=1 00:27:44.356 00:27:44.356 ' 00:27:44.356 
11:55:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.356 --rc genhtml_branch_coverage=1 00:27:44.356 --rc genhtml_function_coverage=1 00:27:44.356 --rc genhtml_legend=1 00:27:44.356 --rc geninfo_all_blocks=1 00:27:44.356 --rc geninfo_unexecuted_blocks=1 00:27:44.356 00:27:44.356 ' 00:27:44.356 11:55:17 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:44.356 11:55:17 -- nvmf/common.sh@7 -- # uname -s 00:27:44.356 11:55:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.356 11:55:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.356 11:55:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.356 11:55:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.356 11:55:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.356 11:55:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.356 11:55:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.356 11:55:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.356 11:55:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.356 11:55:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.356 11:55:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:27:44.356 11:55:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:27:44.356 11:55:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.356 11:55:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.356 11:55:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:44.356 11:55:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:44.356 11:55:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.356 11:55:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.356 11:55:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.356 11:55:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.356 11:55:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.356 11:55:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.356 11:55:17 -- paths/export.sh@5 -- # export PATH 00:27:44.356 11:55:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.356 11:55:17 -- nvmf/common.sh@46 -- # : 0 00:27:44.356 11:55:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:44.356 11:55:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:44.356 11:55:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:44.356 11:55:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.356 11:55:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.356 11:55:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:44.356 11:55:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:44.356 11:55:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:44.356 11:55:17 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:44.356 11:55:17 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:44.356 11:55:17 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:44.356 11:55:17 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:44.356 11:55:17 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:44.356 11:55:17 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:44.356 11:55:17 -- host/multipath.sh@30 -- # nvmftestinit 00:27:44.356 11:55:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:44.356 11:55:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.356 11:55:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:44.356 11:55:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:44.356 11:55:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:44.356 11:55:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.356 11:55:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.356 11:55:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.356 11:55:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:44.356 11:55:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:44.356 11:55:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:44.356 11:55:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:44.356 11:55:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:44.356 11:55:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:44.356 11:55:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.356 11:55:17 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.356 11:55:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:44.356 11:55:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:44.356 11:55:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:44.356 11:55:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:44.356 11:55:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:44.356 11:55:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.356 11:55:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:44.356 11:55:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:44.356 11:55:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:44.356 11:55:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:44.356 11:55:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:44.356 11:55:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:44.356 Cannot find device "nvmf_tgt_br" 00:27:44.356 11:55:17 -- nvmf/common.sh@154 -- # true 00:27:44.356 11:55:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:44.617 Cannot find device "nvmf_tgt_br2" 00:27:44.617 11:55:17 -- nvmf/common.sh@155 -- # true 00:27:44.617 11:55:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:44.617 11:55:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:44.617 Cannot find device "nvmf_tgt_br" 00:27:44.617 11:55:17 -- nvmf/common.sh@157 -- # true 00:27:44.617 11:55:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:44.617 Cannot find device "nvmf_tgt_br2" 00:27:44.617 11:55:17 -- nvmf/common.sh@158 -- # true 00:27:44.617 11:55:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:44.617 11:55:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:44.617 11:55:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:44.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:44.617 11:55:17 -- nvmf/common.sh@161 -- # true 00:27:44.617 11:55:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:44.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:44.617 11:55:17 -- nvmf/common.sh@162 -- # true 00:27:44.617 11:55:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:44.617 11:55:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:44.617 11:55:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:44.617 11:55:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:44.617 11:55:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:44.617 11:55:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:44.617 11:55:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:44.617 11:55:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:44.617 11:55:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:44.617 11:55:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:44.617 11:55:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:44.617 11:55:17 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:27:44.617 11:55:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:44.617 11:55:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:44.617 11:55:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:44.617 11:55:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:44.617 11:55:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:44.617 11:55:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:44.617 11:55:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:44.617 11:55:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:44.617 11:55:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:44.617 11:55:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:44.617 11:55:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:44.617 11:55:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:44.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:27:44.617 00:27:44.617 --- 10.0.0.2 ping statistics --- 00:27:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.617 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:27:44.617 11:55:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:44.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:44.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:27:44.617 00:27:44.617 --- 10.0.0.3 ping statistics --- 00:27:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.617 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:27:44.617 11:55:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:44.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:27:44.617 00:27:44.617 --- 10.0.0.1 ping statistics --- 00:27:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.617 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:27:44.617 11:55:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.617 11:55:17 -- nvmf/common.sh@421 -- # return 0 00:27:44.617 11:55:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:44.617 11:55:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.617 11:55:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:44.617 11:55:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:44.617 11:55:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.617 11:55:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:44.617 11:55:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:44.877 11:55:17 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:44.877 11:55:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:44.877 11:55:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.877 11:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:44.877 11:55:17 -- nvmf/common.sh@469 -- # nvmfpid=88462 00:27:44.877 11:55:17 -- nvmf/common.sh@470 -- # waitforlisten 88462 00:27:44.877 11:55:17 -- common/autotest_common.sh@829 -- # '[' -z 88462 ']' 00:27:44.877 11:55:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:44.877 11:55:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.877 11:55:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.877 11:55:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.877 11:55:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.877 11:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:44.877 [2024-11-20 11:55:17.732588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:44.878 [2024-11-20 11:55:17.732660] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.878 [2024-11-20 11:55:17.869104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:45.138 [2024-11-20 11:55:17.946489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:45.138 [2024-11-20 11:55:17.946630] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.138 [2024-11-20 11:55:17.946636] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.138 [2024-11-20 11:55:17.946641] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
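With the target application now running inside the namespace, the trace that follows configures it for the multipath test; condensed into a hedged sketch, using the same commands and arguments multipath.sh issues below against the target's default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target-side setup, as run below by multipath.sh
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same address give the host two paths to one subsystem
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421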
00:27:45.138 [2024-11-20 11:55:17.946869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.138 [2024-11-20 11:55:17.946870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.708 11:55:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.708 11:55:18 -- common/autotest_common.sh@862 -- # return 0 00:27:45.708 11:55:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:45.708 11:55:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:45.708 11:55:18 -- common/autotest_common.sh@10 -- # set +x 00:27:45.708 11:55:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.708 11:55:18 -- host/multipath.sh@33 -- # nvmfapp_pid=88462 00:27:45.708 11:55:18 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:45.968 [2024-11-20 11:55:18.780629] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.968 11:55:18 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:45.968 Malloc0 00:27:45.968 11:55:18 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:46.229 11:55:19 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.489 11:55:19 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.489 [2024-11-20 11:55:19.526186] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.749 11:55:19 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:46.749 [2024-11-20 11:55:19.701932] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:46.749 11:55:19 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:46.749 11:55:19 -- host/multipath.sh@44 -- # bdevperf_pid=88559 00:27:46.749 11:55:19 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:46.749 11:55:19 -- host/multipath.sh@47 -- # waitforlisten 88559 /var/tmp/bdevperf.sock 00:27:46.749 11:55:19 -- common/autotest_common.sh@829 -- # '[' -z 88559 ']' 00:27:46.749 11:55:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:46.749 11:55:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:46.749 11:55:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
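On the host side, the trace below attaches the same subsystem through bdevperf twice, once per listener and the second time with -x multipath; each confirm_io_on_port iteration then flips ANA states on the target and reads back which listener is optimized. A hedged condensation of those host-side steps, taken from the commands visible in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # host-side attach, as issued below against bdevperf's RPC socket
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # per-iteration ANA flip on the target, then read back the optimized port
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    "$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'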
00:27:46.749 11:55:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.749 11:55:19 -- common/autotest_common.sh@10 -- # set +x 00:27:47.693 11:55:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.693 11:55:20 -- common/autotest_common.sh@862 -- # return 0 00:27:47.693 11:55:20 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:47.951 11:55:20 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:48.210 Nvme0n1 00:27:48.210 11:55:21 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:48.469 Nvme0n1 00:27:48.469 11:55:21 -- host/multipath.sh@78 -- # sleep 1 00:27:48.469 11:55:21 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:49.851 11:55:22 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:49.851 11:55:22 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:49.851 11:55:22 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:49.851 11:55:22 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:49.851 11:55:22 -- host/multipath.sh@65 -- # dtrace_pid=88642 00:27:49.851 11:55:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:49.851 11:55:22 -- host/multipath.sh@66 -- # sleep 6 00:27:56.429 11:55:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:56.429 11:55:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:56.429 11:55:29 -- host/multipath.sh@67 -- # active_port=4421 00:27:56.429 11:55:29 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:56.429 Attaching 4 probes... 
00:27:56.429 @path[10.0.0.2, 4421]: 25962 00:27:56.429 @path[10.0.0.2, 4421]: 26597 00:27:56.429 @path[10.0.0.2, 4421]: 26476 00:27:56.429 @path[10.0.0.2, 4421]: 26191 00:27:56.429 @path[10.0.0.2, 4421]: 26401 00:27:56.429 11:55:29 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:56.429 11:55:29 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:56.429 11:55:29 -- host/multipath.sh@69 -- # sed -n 1p 00:27:56.429 11:55:29 -- host/multipath.sh@69 -- # port=4421 00:27:56.429 11:55:29 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:56.429 11:55:29 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:56.429 11:55:29 -- host/multipath.sh@72 -- # kill 88642 00:27:56.429 11:55:29 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:56.429 11:55:29 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:56.429 11:55:29 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:56.429 11:55:29 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:56.689 11:55:29 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:56.689 11:55:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:56.689 11:55:29 -- host/multipath.sh@65 -- # dtrace_pid=88773 00:27:56.689 11:55:29 -- host/multipath.sh@66 -- # sleep 6 00:28:03.262 11:55:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:03.262 11:55:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:03.262 11:55:35 -- host/multipath.sh@67 -- # active_port=4420 00:28:03.262 11:55:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.262 Attaching 4 probes... 
00:28:03.262 @path[10.0.0.2, 4420]: 26207 00:28:03.262 @path[10.0.0.2, 4420]: 26595 00:28:03.262 @path[10.0.0.2, 4420]: 26562 00:28:03.262 @path[10.0.0.2, 4420]: 26650 00:28:03.262 @path[10.0.0.2, 4420]: 26789 00:28:03.262 11:55:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:03.262 11:55:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:03.262 11:55:35 -- host/multipath.sh@69 -- # sed -n 1p 00:28:03.262 11:55:35 -- host/multipath.sh@69 -- # port=4420 00:28:03.262 11:55:35 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:03.262 11:55:35 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:03.262 11:55:35 -- host/multipath.sh@72 -- # kill 88773 00:28:03.262 11:55:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.262 11:55:35 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:28:03.262 11:55:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:03.262 11:55:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:03.262 11:55:36 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:28:03.262 11:55:36 -- host/multipath.sh@65 -- # dtrace_pid=88904 00:28:03.262 11:55:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:03.262 11:55:36 -- host/multipath.sh@66 -- # sleep 6 00:28:09.832 11:55:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:09.832 11:55:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:09.832 11:55:42 -- host/multipath.sh@67 -- # active_port=4421 00:28:09.832 11:55:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:09.832 Attaching 4 probes... 
00:28:09.832 @path[10.0.0.2, 4421]: 18892 00:28:09.832 @path[10.0.0.2, 4421]: 26050 00:28:09.832 @path[10.0.0.2, 4421]: 26212 00:28:09.832 @path[10.0.0.2, 4421]: 26095 00:28:09.832 @path[10.0.0.2, 4421]: 26171 00:28:09.832 11:55:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:09.832 11:55:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:09.832 11:55:42 -- host/multipath.sh@69 -- # sed -n 1p 00:28:09.832 11:55:42 -- host/multipath.sh@69 -- # port=4421 00:28:09.832 11:55:42 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:09.832 11:55:42 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:09.832 11:55:42 -- host/multipath.sh@72 -- # kill 88904 00:28:09.832 11:55:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:09.832 11:55:42 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:09.832 11:55:42 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:09.832 11:55:42 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:09.832 11:55:42 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:09.832 11:55:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:09.832 11:55:42 -- host/multipath.sh@65 -- # dtrace_pid=89034 00:28:09.832 11:55:42 -- host/multipath.sh@66 -- # sleep 6 00:28:16.407 11:55:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:16.407 11:55:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:16.407 11:55:48 -- host/multipath.sh@67 -- # active_port= 00:28:16.407 11:55:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:16.407 Attaching 4 probes... 
00:28:16.407 00:28:16.407 00:28:16.407 00:28:16.407 00:28:16.408 00:28:16.408 11:55:48 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:16.408 11:55:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:16.408 11:55:48 -- host/multipath.sh@69 -- # sed -n 1p 00:28:16.408 11:55:48 -- host/multipath.sh@69 -- # port= 00:28:16.408 11:55:48 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:16.408 11:55:48 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:16.408 11:55:48 -- host/multipath.sh@72 -- # kill 89034 00:28:16.408 11:55:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:16.408 11:55:48 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:16.408 11:55:48 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:16.408 11:55:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:16.408 11:55:49 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:16.408 11:55:49 -- host/multipath.sh@65 -- # dtrace_pid=89165 00:28:16.408 11:55:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:16.408 11:55:49 -- host/multipath.sh@66 -- # sleep 6 00:28:22.984 11:55:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:22.984 11:55:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:22.984 11:55:55 -- host/multipath.sh@67 -- # active_port=4421 00:28:22.984 11:55:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:22.984 Attaching 4 probes... 
00:28:22.984 @path[10.0.0.2, 4421]: 25453 00:28:22.984 @path[10.0.0.2, 4421]: 25900 00:28:22.984 @path[10.0.0.2, 4421]: 25866 00:28:22.984 @path[10.0.0.2, 4421]: 25940 00:28:22.984 @path[10.0.0.2, 4421]: 25895 00:28:22.984 11:55:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:22.984 11:55:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:22.984 11:55:55 -- host/multipath.sh@69 -- # sed -n 1p 00:28:22.984 11:55:55 -- host/multipath.sh@69 -- # port=4421 00:28:22.984 11:55:55 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:22.984 11:55:55 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:22.984 11:55:55 -- host/multipath.sh@72 -- # kill 89165 00:28:22.984 11:55:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:22.984 11:55:55 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:22.984 [2024-11-20 11:55:55.788304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788376] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788420] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.984 [2024-11-20 11:55:55.788503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788534] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the 
state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 11:55:55.788845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 [2024-11-20 
11:55:55.788849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c800 is same with the state(5) to be set 00:28:22.985 11:55:55 -- host/multipath.sh@101 -- # sleep 1 00:28:23.923 11:55:56 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:23.923 11:55:56 -- host/multipath.sh@65 -- # dtrace_pid=89300 00:28:23.923 11:55:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:23.923 11:55:56 -- host/multipath.sh@66 -- # sleep 6 00:28:30.518 11:56:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:30.518 11:56:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:30.518 11:56:03 -- host/multipath.sh@67 -- # active_port=4420 00:28:30.518 11:56:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:30.518 Attaching 4 probes... 00:28:30.518 @path[10.0.0.2, 4420]: 25690 00:28:30.518 @path[10.0.0.2, 4420]: 25887 00:28:30.518 @path[10.0.0.2, 4420]: 26009 00:28:30.518 @path[10.0.0.2, 4420]: 25947 00:28:30.518 @path[10.0.0.2, 4420]: 25954 00:28:30.518 11:56:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:30.518 11:56:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:30.518 11:56:03 -- host/multipath.sh@69 -- # sed -n 1p 00:28:30.518 11:56:03 -- host/multipath.sh@69 -- # port=4420 00:28:30.518 11:56:03 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:30.518 11:56:03 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:30.518 11:56:03 -- host/multipath.sh@72 -- # kill 89300 00:28:30.518 11:56:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:30.518 11:56:03 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:30.518 [2024-11-20 11:56:03.207919] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:30.518 11:56:03 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:30.518 11:56:03 -- host/multipath.sh@111 -- # sleep 6 00:28:37.093 11:56:09 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:37.093 11:56:09 -- host/multipath.sh@65 -- # dtrace_pid=89492 00:28:37.093 11:56:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:37.093 11:56:09 -- host/multipath.sh@66 -- # sleep 6 00:28:43.692 11:56:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:43.692 11:56:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:43.692 11:56:15 -- host/multipath.sh@67 -- # active_port=4421 00:28:43.692 11:56:15 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:43.692 Attaching 4 probes... 
00:28:43.692 @path[10.0.0.2, 4421]: 25446 00:28:43.692 @path[10.0.0.2, 4421]: 25740 00:28:43.692 @path[10.0.0.2, 4421]: 25715 00:28:43.692 @path[10.0.0.2, 4421]: 25796 00:28:43.692 @path[10.0.0.2, 4421]: 25819 00:28:43.692 11:56:15 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:43.692 11:56:15 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:43.692 11:56:15 -- host/multipath.sh@69 -- # sed -n 1p 00:28:43.692 11:56:15 -- host/multipath.sh@69 -- # port=4421 00:28:43.692 11:56:15 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:43.692 11:56:15 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:43.692 11:56:15 -- host/multipath.sh@72 -- # kill 89492 00:28:43.692 11:56:15 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:43.692 11:56:15 -- host/multipath.sh@114 -- # killprocess 88559 00:28:43.692 11:56:15 -- common/autotest_common.sh@936 -- # '[' -z 88559 ']' 00:28:43.692 11:56:15 -- common/autotest_common.sh@940 -- # kill -0 88559 00:28:43.692 11:56:15 -- common/autotest_common.sh@941 -- # uname 00:28:43.692 11:56:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:43.692 11:56:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88559 00:28:43.692 killing process with pid 88559 00:28:43.692 11:56:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:43.692 11:56:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:43.692 11:56:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88559' 00:28:43.692 11:56:15 -- common/autotest_common.sh@955 -- # kill 88559 00:28:43.692 11:56:15 -- common/autotest_common.sh@960 -- # wait 88559 00:28:43.692 Connection closed with partial response: 00:28:43.692 00:28:43.692 00:28:43.692 11:56:15 -- host/multipath.sh@116 -- # wait 88559 00:28:43.692 11:56:15 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:43.692 [2024-11-20 11:55:19.756470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:43.692 [2024-11-20 11:55:19.756548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88559 ] 00:28:43.692 [2024-11-20 11:55:19.892466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.692 [2024-11-20 11:55:19.976663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.692 Running I/O for 90 seconds... 
00:28:43.692 [2024-11-20 11:55:29.479014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.692 [2024-11-20 11:55:29.479078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.692 [2024-11-20 11:55:29.479133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:43.692 [2024-11-20 11:55:29.479443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-20 11:55:29.479452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.479616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.479680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.479704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.479727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.479752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.479775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.479789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.479799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:43.693 [2024-11-20 11:55:29.480229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.480462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.480487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.480510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.480535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.480583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-20 11:55:29.480607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.693 [2024-11-20 11:55:29.480631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:43.693 [2024-11-20 11:55:29.480645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.480772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.480797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.480821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.480868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.480917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.480942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.480956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:28:43.694 [2024-11-20 11:55:29.480980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.480994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.481018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.481113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.481162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.481210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.694 [2024-11-20 11:55:29.481259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-20 11:55:29.481383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:43.694 [2024-11-20 11:55:29.481397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481455] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.481734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.481759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.481808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.481856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.481880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.481910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.481935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:25 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.482515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.482543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.482567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.482592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.482616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.695 [2024-11-20 11:55:29.482640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.482677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.482701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.695 [2024-11-20 11:55:29.482716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.695 [2024-11-20 11:55:29.482725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:43.696 [2024-11-20 11:55:29.482740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.696 [2024-11-20 11:55:29.482749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:43.696 [2024-11-20 11:55:29.482769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.696 [2024-11-20 11:55:29.482786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs at 11:55:29: READ/WRITE commands on qid:1 (lba 72312-72456), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:43.696 [2024-11-20 11:55:35.922174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.696 [2024-11-20 11:55:35.922227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs at 11:55:35: READ/WRITE commands on qid:1 (lba 63872-64936), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:43.699 [2024-11-20 11:55:42.738212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.699 [2024-11-20 11:55:42.738268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs at 11:55:42: READ/WRITE commands on qid:1 (lba 85968-86760), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:43.702 [2024-11-20 11:55:42.740993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.702 [2024-11-20 11:55:42.741009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.702 [2024-11-20 11:55:42.741576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:43.702 [2024-11-20 11:55:42.741605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.702 [2024-11-20 11:55:42.741674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:43.702 [2024-11-20 11:55:42.741694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.703 [2024-11-20 11:55:42.741704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:42.741738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:42.741767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.703 [2024-11-20 11:55:42.741797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:42.741828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:42.741857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.703 [2024-11-20 11:55:42.741897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:42.741925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:42.741954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.741973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:42.741982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.742000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.703 [2024-11-20 11:55:42.742009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.742028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.703 [2024-11-20 11:55:42.742041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:42.742060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.703 [2024-11-20 11:55:42.742069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:26 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.703 [2024-11-20 11:55:55.789376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:43.703 [2024-11-20 11:55:55.789385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 
11:55:55.789574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.704 [2024-11-20 11:55:55.789664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.704 [2024-11-20 11:55:55.789835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.704 [2024-11-20 11:55:55.789952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.704 [2024-11-20 11:55:55.789970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.704 [2024-11-20 11:55:55.789989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.704 [2024-11-20 11:55:55.789999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:43.705 [2024-11-20 11:55:55.790172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790363] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.705 [2024-11-20 11:55:55.790448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790551] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.705 [2024-11-20 11:55:55.790704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.705 [2024-11-20 11:55:55.790714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2904 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.790817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.790840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.790934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 
[2024-11-20 11:55:55.790952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.790971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.790989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.790999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.791187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.791205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.791224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.706 [2024-11-20 11:55:55.791299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.706 [2024-11-20 11:55:55.791322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.706 [2024-11-20 11:55:55.791336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.707 [2024-11-20 11:55:55.791345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:43.707 [2024-11-20 11:55:55.791543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.707 [2024-11-20 11:55:55.791551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b05b0 is same with the state(5) to be set 00:28:43.707 [2024-11-20 11:55:55.791576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:43.707 [2024-11-20 11:55:55.791583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:43.707 [2024-11-20 11:55:55.791590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3088 len:8 PRP1 0x0 PRP2 0x0 00:28:43.707 [2024-11-20 11:55:55.791599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791645] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b05b0 was disconnected and freed. reset controller. 00:28:43.707 [2024-11-20 11:55:55.791730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.707 [2024-11-20 11:55:55.791745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.707 [2024-11-20 11:55:55.791763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.707 [2024-11-20 11:55:55.791780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.707 [2024-11-20 11:55:55.791797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.707 [2024-11-20 11:55:55.791805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254790 is same with the state(5) to be set 00:28:43.707 [2024-11-20 11:55:55.792745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.707 [2024-11-20 11:55:55.792774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254790 (9): Bad file descriptor 00:28:43.707 [2024-11-20 11:55:55.792849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.707 [2024-11-20 11:55:55.792879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.707 [2024-11-20 11:55:55.792891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254790 with addr=10.0.0.2, port=4421 00:28:43.707 [2024-11-20 
11:55:55.792912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254790 is same with the state(5) to be set 00:28:43.707 [2024-11-20 11:55:55.792926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254790 (9): Bad file descriptor 00:28:43.707 [2024-11-20 11:55:55.792939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.707 [2024-11-20 11:55:55.792947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.707 [2024-11-20 11:55:55.792955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.707 [2024-11-20 11:55:55.792972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.707 [2024-11-20 11:55:55.792980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.707 [2024-11-20 11:56:05.823962] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:43.707 Received shutdown signal, test time was about 54.234721 seconds 00:28:43.707 00:28:43.707 Latency(us) 00:28:43.707 [2024-11-20T11:56:16.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.707 [2024-11-20T11:56:16.750Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:43.707 Verification LBA range: start 0x0 length 0x4000 00:28:43.707 Nvme0n1 : 54.23 14783.71 57.75 0.00 0.00 8647.89 604.56 7033243.39 00:28:43.707 [2024-11-20T11:56:16.750Z] =================================================================================================================== 00:28:43.707 [2024-11-20T11:56:16.750Z] Total : 14783.71 57.75 0.00 0.00 8647.89 604.56 7033243.39 00:28:43.707 11:56:15 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.707 11:56:16 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:43.707 11:56:16 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:43.707 11:56:16 -- host/multipath.sh@125 -- # nvmftestfini 00:28:43.707 11:56:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:43.707 11:56:16 -- nvmf/common.sh@116 -- # sync 00:28:43.707 11:56:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:43.707 11:56:16 -- nvmf/common.sh@119 -- # set +e 00:28:43.707 11:56:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:43.707 11:56:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:43.707 rmmod nvme_tcp 00:28:43.707 rmmod nvme_fabrics 00:28:43.707 rmmod nvme_keyring 00:28:43.707 11:56:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:43.707 11:56:16 -- nvmf/common.sh@123 -- # set -e 00:28:43.707 11:56:16 -- nvmf/common.sh@124 -- # return 0 00:28:43.707 11:56:16 -- nvmf/common.sh@477 -- # '[' -n 88462 ']' 00:28:43.707 11:56:16 -- nvmf/common.sh@478 -- # killprocess 88462 00:28:43.707 11:56:16 -- common/autotest_common.sh@936 -- # '[' -z 88462 ']' 00:28:43.707 11:56:16 -- common/autotest_common.sh@940 -- # kill -0 88462 00:28:43.707 11:56:16 -- common/autotest_common.sh@941 -- # uname 00:28:43.707 11:56:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:43.707 11:56:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88462 00:28:43.707 killing process with pid 88462 00:28:43.707 11:56:16 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:43.708 11:56:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:43.708 11:56:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88462' 00:28:43.708 11:56:16 -- common/autotest_common.sh@955 -- # kill 88462 00:28:43.708 11:56:16 -- common/autotest_common.sh@960 -- # wait 88462 00:28:43.708 11:56:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:43.708 11:56:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:43.708 11:56:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:43.708 11:56:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:43.708 11:56:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:43.708 11:56:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.708 11:56:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.708 11:56:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.708 11:56:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:43.708 ************************************ 00:28:43.708 END TEST nvmf_multipath 00:28:43.708 ************************************ 00:28:43.708 00:28:43.708 real 0m59.480s 00:28:43.708 user 2m49.149s 00:28:43.708 sys 0m12.158s 00:28:43.708 11:56:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:43.708 11:56:16 -- common/autotest_common.sh@10 -- # set +x 00:28:43.708 11:56:16 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:43.708 11:56:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:43.708 11:56:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:43.708 11:56:16 -- common/autotest_common.sh@10 -- # set +x 00:28:43.708 ************************************ 00:28:43.708 START TEST nvmf_timeout 00:28:43.708 ************************************ 00:28:43.708 11:56:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:43.969 * Looking for test storage... 00:28:43.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:43.969 11:56:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:43.969 11:56:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:43.969 11:56:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:43.969 11:56:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:43.969 11:56:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:43.969 11:56:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:43.969 11:56:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:43.969 11:56:16 -- scripts/common.sh@335 -- # IFS=.-: 00:28:43.969 11:56:16 -- scripts/common.sh@335 -- # read -ra ver1 00:28:43.969 11:56:16 -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.969 11:56:16 -- scripts/common.sh@336 -- # read -ra ver2 00:28:43.969 11:56:16 -- scripts/common.sh@337 -- # local 'op=<' 00:28:43.969 11:56:16 -- scripts/common.sh@339 -- # ver1_l=2 00:28:43.969 11:56:16 -- scripts/common.sh@340 -- # ver2_l=1 00:28:43.969 11:56:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:43.969 11:56:16 -- scripts/common.sh@343 -- # case "$op" in 00:28:43.969 11:56:16 -- scripts/common.sh@344 -- # : 1 00:28:43.969 11:56:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:43.969 11:56:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.969 11:56:16 -- scripts/common.sh@364 -- # decimal 1 00:28:43.969 11:56:16 -- scripts/common.sh@352 -- # local d=1 00:28:43.969 11:56:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.969 11:56:16 -- scripts/common.sh@354 -- # echo 1 00:28:43.969 11:56:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:43.969 11:56:16 -- scripts/common.sh@365 -- # decimal 2 00:28:43.969 11:56:16 -- scripts/common.sh@352 -- # local d=2 00:28:43.969 11:56:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.969 11:56:16 -- scripts/common.sh@354 -- # echo 2 00:28:43.969 11:56:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:43.969 11:56:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:43.969 11:56:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:43.969 11:56:16 -- scripts/common.sh@367 -- # return 0 00:28:43.969 11:56:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.969 11:56:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.969 --rc genhtml_branch_coverage=1 00:28:43.969 --rc genhtml_function_coverage=1 00:28:43.969 --rc genhtml_legend=1 00:28:43.969 --rc geninfo_all_blocks=1 00:28:43.969 --rc geninfo_unexecuted_blocks=1 00:28:43.969 00:28:43.969 ' 00:28:43.969 11:56:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.969 --rc genhtml_branch_coverage=1 00:28:43.969 --rc genhtml_function_coverage=1 00:28:43.969 --rc genhtml_legend=1 00:28:43.969 --rc geninfo_all_blocks=1 00:28:43.969 --rc geninfo_unexecuted_blocks=1 00:28:43.969 00:28:43.969 ' 00:28:43.969 11:56:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.969 --rc genhtml_branch_coverage=1 00:28:43.969 --rc genhtml_function_coverage=1 00:28:43.969 --rc genhtml_legend=1 00:28:43.969 --rc geninfo_all_blocks=1 00:28:43.969 --rc geninfo_unexecuted_blocks=1 00:28:43.969 00:28:43.969 ' 00:28:43.969 11:56:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.969 --rc genhtml_branch_coverage=1 00:28:43.969 --rc genhtml_function_coverage=1 00:28:43.969 --rc genhtml_legend=1 00:28:43.969 --rc geninfo_all_blocks=1 00:28:43.969 --rc geninfo_unexecuted_blocks=1 00:28:43.969 00:28:43.969 ' 00:28:43.969 11:56:16 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:43.969 11:56:16 -- nvmf/common.sh@7 -- # uname -s 00:28:43.969 11:56:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.969 11:56:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.969 11:56:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.969 11:56:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.969 11:56:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.969 11:56:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.969 11:56:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.969 11:56:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.969 11:56:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.969 11:56:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.969 11:56:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:28:43.969 
11:56:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:28:43.969 11:56:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.969 11:56:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.969 11:56:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:43.969 11:56:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:43.969 11:56:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.969 11:56:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.969 11:56:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.969 11:56:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.970 11:56:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.970 11:56:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.970 11:56:16 -- paths/export.sh@5 -- # export PATH 00:28:43.970 11:56:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.970 11:56:16 -- nvmf/common.sh@46 -- # : 0 00:28:43.970 11:56:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:43.970 11:56:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:43.970 11:56:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:43.970 11:56:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.970 11:56:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.970 11:56:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
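The exports just above give the host side of the test its identity: nvmf/common.sh generates a fresh host NQN with "nvme gen-hostnqn", records the matching NVME_HOSTID (the uuid suffix of that NQN, judging by the two values logged here), and wraps the kernel initiator command in NVME_CONNECT/NVME_HOST. This run drives I/O through bdevperf rather than the kernel initiator, so the following is only an illustrative sketch of how those variables would compose into a connect against the listener this test sets up later (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); it is not executed anywhere in this log.

    # illustrative only; this run uses bdevperf, not the kernel initiator
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # uuid suffix, matching the value logged above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
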
00:28:43.970 11:56:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:43.970 11:56:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:43.970 11:56:16 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.970 11:56:16 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.970 11:56:16 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:43.970 11:56:16 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:43.970 11:56:16 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:43.970 11:56:16 -- host/timeout.sh@19 -- # nvmftestinit 00:28:43.970 11:56:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:43.970 11:56:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.970 11:56:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:43.970 11:56:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:43.970 11:56:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:43.970 11:56:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.970 11:56:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.970 11:56:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.970 11:56:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:43.970 11:56:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:43.970 11:56:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:43.970 11:56:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:43.970 11:56:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:43.970 11:56:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:43.970 11:56:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.970 11:56:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.970 11:56:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:43.970 11:56:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:43.970 11:56:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:43.970 11:56:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:43.970 11:56:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:43.970 11:56:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.970 11:56:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:43.970 11:56:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:43.970 11:56:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:43.970 11:56:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:43.970 11:56:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:43.970 11:56:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:43.970 Cannot find device "nvmf_tgt_br" 00:28:43.970 11:56:16 -- nvmf/common.sh@154 -- # true 00:28:43.970 11:56:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:43.970 Cannot find device "nvmf_tgt_br2" 00:28:43.970 11:56:16 -- nvmf/common.sh@155 -- # true 00:28:43.970 11:56:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:43.970 11:56:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:43.970 Cannot find device "nvmf_tgt_br" 00:28:43.970 11:56:16 -- nvmf/common.sh@157 -- # true 00:28:43.970 11:56:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:43.970 Cannot find device "nvmf_tgt_br2" 00:28:43.970 11:56:16 -- nvmf/common.sh@158 -- # true 00:28:43.970 11:56:16 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:43.970 11:56:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:44.231 11:56:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:44.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:44.231 11:56:17 -- nvmf/common.sh@161 -- # true 00:28:44.231 11:56:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:44.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:44.231 11:56:17 -- nvmf/common.sh@162 -- # true 00:28:44.231 11:56:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:44.231 11:56:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:44.231 11:56:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:44.231 11:56:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:44.231 11:56:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:44.231 11:56:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:44.231 11:56:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:44.231 11:56:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:44.231 11:56:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:44.231 11:56:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:44.231 11:56:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:44.231 11:56:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:44.231 11:56:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:44.231 11:56:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:44.231 11:56:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:44.231 11:56:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:44.231 11:56:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:44.231 11:56:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:44.231 11:56:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:44.231 11:56:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:44.231 11:56:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:44.231 11:56:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:44.231 11:56:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:44.231 11:56:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:44.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:28:44.231 00:28:44.231 --- 10.0.0.2 ping statistics --- 00:28:44.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.231 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:28:44.231 11:56:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:44.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:44.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.025 ms 00:28:44.231 00:28:44.231 --- 10.0.0.3 ping statistics --- 00:28:44.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.231 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:28:44.231 11:56:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:44.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:28:44.231 00:28:44.231 --- 10.0.0.1 ping statistics --- 00:28:44.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.231 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:28:44.231 11:56:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.231 11:56:17 -- nvmf/common.sh@421 -- # return 0 00:28:44.231 11:56:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:44.231 11:56:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.231 11:56:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:44.231 11:56:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:44.231 11:56:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.231 11:56:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:44.231 11:56:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:44.231 11:56:17 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:44.231 11:56:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:44.231 11:56:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:44.231 11:56:17 -- common/autotest_common.sh@10 -- # set +x 00:28:44.231 11:56:17 -- nvmf/common.sh@469 -- # nvmfpid=89822 00:28:44.231 11:56:17 -- nvmf/common.sh@470 -- # waitforlisten 89822 00:28:44.231 11:56:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:44.231 11:56:17 -- common/autotest_common.sh@829 -- # '[' -z 89822 ']' 00:28:44.231 11:56:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.231 11:56:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:44.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.231 11:56:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.231 11:56:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:44.231 11:56:17 -- common/autotest_common.sh@10 -- # set +x 00:28:44.231 [2024-11-20 11:56:17.228717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:44.231 [2024-11-20 11:56:17.228769] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.492 [2024-11-20 11:56:17.367038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:44.492 [2024-11-20 11:56:17.445400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:44.492 [2024-11-20 11:56:17.445539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.492 [2024-11-20 11:56:17.445546] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
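The nvmf_veth_init block above is what the three pings just confirmed: the initiator keeps 10.0.0.1 in the default namespace, the target owns 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and a bridge joins the veth peers. Condensed into a standalone sketch, with names and addresses as in this log; the second target interface and the teardown path are omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator to target, the same reachability check as above
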
00:28:44.492 [2024-11-20 11:56:17.445550] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.492 [2024-11-20 11:56:17.445787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.492 [2024-11-20 11:56:17.445790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.062 11:56:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:45.062 11:56:18 -- common/autotest_common.sh@862 -- # return 0 00:28:45.062 11:56:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:45.062 11:56:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:45.062 11:56:18 -- common/autotest_common.sh@10 -- # set +x 00:28:45.322 11:56:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.322 11:56:18 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.322 11:56:18 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:45.322 [2024-11-20 11:56:18.296020] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.322 11:56:18 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:45.582 Malloc0 00:28:45.582 11:56:18 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.842 11:56:18 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:46.102 11:56:18 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.102 [2024-11-20 11:56:19.076353] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.102 11:56:19 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:46.102 11:56:19 -- host/timeout.sh@32 -- # bdevperf_pid=89907 00:28:46.102 11:56:19 -- host/timeout.sh@34 -- # waitforlisten 89907 /var/tmp/bdevperf.sock 00:28:46.102 11:56:19 -- common/autotest_common.sh@829 -- # '[' -z 89907 ']' 00:28:46.102 11:56:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:46.102 11:56:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.102 11:56:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:46.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:46.102 11:56:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.102 11:56:19 -- common/autotest_common.sh@10 -- # set +x 00:28:46.102 [2024-11-20 11:56:19.130432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
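At this point timeout.sh has brought the target up entirely through rpc.py (no -s flag, so the calls go to the target's default /var/tmp/spdk.sock) and launched a separate bdevperf process to drive I/O. The same sequence, pulled out of the xtrace above into one place; the transport flags -o and -u 8192 are copied verbatim from the run rather than interpreted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # separate I/O generator: 128-deep 4 KiB verify workload for 10 s on its own RPC socket,
    # started in wait-for-RPC mode (-z) and kicked off later via bdevperf.py perform_tests,
    # as the log goes on to show
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
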
00:28:46.102 [2024-11-20 11:56:19.130498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89907 ] 00:28:46.360 [2024-11-20 11:56:19.267728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.360 [2024-11-20 11:56:19.350246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.299 11:56:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.299 11:56:19 -- common/autotest_common.sh@862 -- # return 0 00:28:47.299 11:56:19 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:47.299 11:56:20 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:47.559 NVMe0n1 00:28:47.559 11:56:20 -- host/timeout.sh@51 -- # rpc_pid=89955 00:28:47.559 11:56:20 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:47.559 11:56:20 -- host/timeout.sh@53 -- # sleep 1 00:28:47.559 Running I/O for 10 seconds... 00:28:48.499 11:56:21 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.762 [2024-11-20 11:56:21.606981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607339] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 
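The wall of ABORTED - SQ DELETION completions that follows is the consequence of the last few steps logged above: bdevperf attaches NVMe0 to the subsystem with a 5-second controller-loss timeout and a 2-second reconnect delay, perform_tests starts the verify workload, and the test then pulls the 10.0.0.2:4420 listener out from under the connection, so the queue pair is torn down and every in-flight I/O is completed as aborted. The initiator-side knobs, collected from the xtrace above (the -r -1 value for bdev_nvme_set_options is copied verbatim rather than interpreted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    # trigger the failure path the timeout test exercises
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
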
[2024-11-20 11:56:21.607748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.762 [2024-11-20 11:56:21.607781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.607813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.607852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.607929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.607971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608064] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608095] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300a40 is same with the state(5) to be set 00:28:48.763 [2024-11-20 11:56:21.608682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 
nsid:1 lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.608991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.608998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 
11:56:21.609003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.609017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.609023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.609030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.609036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.609043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.609048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.609055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-11-20 11:56:21.609061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.763 [2024-11-20 11:56:21.609098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.764 [2024-11-20 11:56:21.609119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.764 [2024-11-20 11:56:21.609243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.764 [2024-11-20 11:56:21.609267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.764 [2024-11-20 11:56:21.609310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.764 [2024-11-20 11:56:21.609502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.764 [2024-11-20 11:56:21.609520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.764 [2024-11-20 11:56:21.609569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-11-20 11:56:21.609574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 
[2024-11-20 11:56:21.609599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.609952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.609982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.609987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.610000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.610005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.610012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.610024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.610030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27048 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.765 [2024-11-20 11:56:21.610036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.610042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.610047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.765 [2024-11-20 11:56:21.610061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.765 [2024-11-20 11:56:21.610066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 
[2024-11-20 11:56:21.610183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.766 [2024-11-20 11:56:21.610285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610473] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.766 [2024-11-20 11:56:21.610538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.766 [2024-11-20 11:56:21.610553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1145050 is same with the state(5) to be set 00:28:48.766 [2024-11-20 11:56:21.610561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:48.766 [2024-11-20 11:56:21.610567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:48.767 [2024-11-20 11:56:21.610572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26712 len:8 PRP1 0x0 PRP2 0x0 00:28:48.767 [2024-11-20 11:56:21.610577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.767 [2024-11-20 11:56:21.610630] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1145050 was disconnected and freed. reset controller. 
00:28:48.767 [2024-11-20 11:56:21.610698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.767 [2024-11-20 11:56:21.610707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.767 [2024-11-20 11:56:21.610720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.767 [2024-11-20 11:56:21.610726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.767 [2024-11-20 11:56:21.610732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.767 [2024-11-20 11:56:21.610737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.767 [2024-11-20 11:56:21.610743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.767 [2024-11-20 11:56:21.610748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.767 [2024-11-20 11:56:21.610753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cfdc0 is same with the state(5) to be set 00:28:48.767 [2024-11-20 11:56:21.610941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.767 [2024-11-20 11:56:21.610962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cfdc0 (9): Bad file descriptor 00:28:48.767 [2024-11-20 11:56:21.611027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.767 [2024-11-20 11:56:21.611051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.767 [2024-11-20 11:56:21.611059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10cfdc0 with addr=10.0.0.2, port=4420 00:28:48.767 [2024-11-20 11:56:21.611065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cfdc0 is same with the state(5) to be set 00:28:48.767 [2024-11-20 11:56:21.611089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cfdc0 (9): Bad file descriptor 00:28:48.767 [2024-11-20 11:56:21.611099] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.767 [2024-11-20 11:56:21.611105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.767 [2024-11-20 11:56:21.611111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.767 [2024-11-20 11:56:21.611130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
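Note on the repeated "connect() failed, errno = 111" entries just above: on this x86-64 Linux build host errno 111 is ECONNREFUSED, i.e. the target's listener has been torn down and every reconnect attempt from bdev_nvme is refused until the configured controller-loss timeout expires. A quick, optional sketch (not part of the test scripts) for confirming that mapping on the host:

  # Sketch only: translate the errno value seen in the log into its symbolic name.
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # expected output on this host: ECONNREFUSED - Connection refused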
00:28:48.767 [2024-11-20 11:56:21.611136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.767 11:56:21 -- host/timeout.sh@56 -- # sleep 2 00:28:50.677 [2024-11-20 11:56:23.607483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.678 [2024-11-20 11:56:23.607537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.678 [2024-11-20 11:56:23.607546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10cfdc0 with addr=10.0.0.2, port=4420 00:28:50.678 [2024-11-20 11:56:23.607556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cfdc0 is same with the state(5) to be set 00:28:50.678 [2024-11-20 11:56:23.607572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cfdc0 (9): Bad file descriptor 00:28:50.678 [2024-11-20 11:56:23.607584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.678 [2024-11-20 11:56:23.607589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.678 [2024-11-20 11:56:23.607596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.678 [2024-11-20 11:56:23.607613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.678 [2024-11-20 11:56:23.607620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.678 11:56:23 -- host/timeout.sh@57 -- # get_controller 00:28:50.678 11:56:23 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:50.678 11:56:23 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:50.937 11:56:23 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:50.937 11:56:23 -- host/timeout.sh@58 -- # get_bdev 00:28:50.937 11:56:23 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:50.937 11:56:23 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:51.197 11:56:24 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:51.197 11:56:24 -- host/timeout.sh@61 -- # sleep 5 00:28:52.578 [2024-11-20 11:56:25.603964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.578 [2024-11-20 11:56:25.604034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.578 [2024-11-20 11:56:25.604043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10cfdc0 with addr=10.0.0.2, port=4420 00:28:52.578 [2024-11-20 11:56:25.604053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cfdc0 is same with the state(5) to be set 00:28:52.578 [2024-11-20 11:56:25.604072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cfdc0 (9): Bad file descriptor 00:28:52.578 [2024-11-20 11:56:25.604084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.578 [2024-11-20 11:56:25.604090] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.578 [2024-11-20 11:56:25.604097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:28:52.578 [2024-11-20 11:56:25.604122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.578 [2024-11-20 11:56:25.604129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.164 [2024-11-20 11:56:27.600371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.164 [2024-11-20 11:56:27.600403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.164 [2024-11-20 11:56:27.600410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.164 [2024-11-20 11:56:27.600417] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:55.164 [2024-11-20 11:56:27.600434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.734 00:28:55.734 Latency(us) 00:28:55.734 [2024-11-20T11:56:28.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.734 [2024-11-20T11:56:28.777Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:55.734 Verification LBA range: start 0x0 length 0x4000 00:28:55.734 NVMe0n1 : 8.10 2426.07 9.48 15.80 0.00 52468.44 1931.74 7033243.39 00:28:55.734 [2024-11-20T11:56:28.777Z] =================================================================================================================== 00:28:55.734 [2024-11-20T11:56:28.777Z] Total : 2426.07 9.48 15.80 0.00 52468.44 1931.74 7033243.39 00:28:55.734 0 00:28:56.305 11:56:29 -- host/timeout.sh@62 -- # get_controller 00:28:56.305 11:56:29 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:56.305 11:56:29 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:56.305 11:56:29 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:56.305 11:56:29 -- host/timeout.sh@63 -- # get_bdev 00:28:56.305 11:56:29 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:56.305 11:56:29 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:56.565 11:56:29 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:56.565 11:56:29 -- host/timeout.sh@65 -- # wait 89955 00:28:56.565 11:56:29 -- host/timeout.sh@67 -- # killprocess 89907 00:28:56.565 11:56:29 -- common/autotest_common.sh@936 -- # '[' -z 89907 ']' 00:28:56.565 11:56:29 -- common/autotest_common.sh@940 -- # kill -0 89907 00:28:56.565 11:56:29 -- common/autotest_common.sh@941 -- # uname 00:28:56.565 11:56:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:56.565 11:56:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89907 00:28:56.565 killing process with pid 89907 00:28:56.565 Received shutdown signal, test time was about 9.007961 seconds 00:28:56.565 00:28:56.565 Latency(us) 00:28:56.565 [2024-11-20T11:56:29.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.565 [2024-11-20T11:56:29.608Z] =================================================================================================================== 00:28:56.565 [2024-11-20T11:56:29.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.565 11:56:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:56.565 11:56:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:56.565 
11:56:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89907' 00:28:56.565 11:56:29 -- common/autotest_common.sh@955 -- # kill 89907 00:28:56.565 11:56:29 -- common/autotest_common.sh@960 -- # wait 89907 00:28:56.824 11:56:29 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.083 [2024-11-20 11:56:29.887261] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:57.083 11:56:29 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:57.083 11:56:29 -- host/timeout.sh@74 -- # bdevperf_pid=90108 00:28:57.083 11:56:29 -- host/timeout.sh@76 -- # waitforlisten 90108 /var/tmp/bdevperf.sock 00:28:57.083 11:56:29 -- common/autotest_common.sh@829 -- # '[' -z 90108 ']' 00:28:57.083 11:56:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:57.083 11:56:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.083 11:56:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:57.083 11:56:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.083 11:56:29 -- common/autotest_common.sh@10 -- # set +x 00:28:57.083 [2024-11-20 11:56:29.939272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:57.083 [2024-11-20 11:56:29.939343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90108 ] 00:28:57.083 [2024-11-20 11:56:30.074569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.342 [2024-11-20 11:56:30.158285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.912 11:56:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.912 11:56:30 -- common/autotest_common.sh@862 -- # return 0 00:28:57.912 11:56:30 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:58.172 11:56:31 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:58.431 NVMe0n1 00:28:58.431 11:56:31 -- host/timeout.sh@84 -- # rpc_pid=90150 00:28:58.431 11:56:31 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:58.431 11:56:31 -- host/timeout.sh@86 -- # sleep 1 00:28:58.431 Running I/O for 10 seconds... 
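The trace above shows host/timeout.sh re-adding the TCP listener, launching a fresh bdevperf instance (pid 90108), and attaching NVMe0 with aggressive reconnect settings before the listener is removed again in the next step. A minimal sketch of that setup sequence, condensed from the RPC calls visible in the trace (paths and the target NQN/address are taken verbatim from the log; the comments describe the usual meaning of these bdev_nvme knobs, not test-specific behavior):

  # Sketch only: the setup sequence exercised by host/timeout.sh, assuming the
  # target already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Global bdev_nvme options; the -r -1 value here is copied as-is from the trace.
  $RPC -s $SOCK bdev_nvme_set_options -r -1

  # Attach the controller with a 5 s controller-loss timeout, 2 s fast-io-fail
  # window and 1 s delay between reconnect attempts - the knobs the timeout test
  # exercises once the listener disappears mid-run.
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

With those values, roughly one reconnect attempt per second is expected after the listener is removed, queued I/O starts failing fast after about 2 seconds, and the controller is given up after about 5 seconds, which matches the abort/reset pattern that follows.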
00:28:59.371 11:56:32 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.634 [2024-11-20 11:56:32.468530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468645] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.634 [2024-11-20 11:56:32.468767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468785] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.468840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edb70 is same with the state(5) to be set 00:28:59.635 [2024-11-20 11:56:32.469037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28904 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.635 [2024-11-20 11:56:32.469488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.635 [2024-11-20 11:56:32.469518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.635 [2024-11-20 11:56:32.469549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.635 [2024-11-20 11:56:32.469557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.635 [2024-11-20 11:56:32.469562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:59.636 [2024-11-20 11:56:32.469617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.469971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.469983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.469994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.470000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.470013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.470025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.470038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.470050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.470063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.636 [2024-11-20 11:56:32.470079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.470092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.636 [2024-11-20 11:56:32.470104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.636 [2024-11-20 11:56:32.470111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 
[2024-11-20 11:56:32.470310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.637 [2024-11-20 11:56:32.470622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.637 [2024-11-20 11:56:32.470635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.637 [2024-11-20 11:56:32.470642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.638 [2024-11-20 11:56:32.470668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.638 [2024-11-20 11:56:32.470715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29440 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.638 [2024-11-20 11:56:32.470728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.638 [2024-11-20 11:56:32.470740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.638 [2024-11-20 11:56:32.470753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.638 [2024-11-20 11:56:32.470778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.638 [2024-11-20 11:56:32.470857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.638 [2024-11-20 11:56:32.470869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5b050 is same with the state(5) to be set 00:28:59.638 [2024-11-20 11:56:32.470889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:59.638 [2024-11-20 11:56:32.470893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:59.638 [2024-11-20 11:56:32.470900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28888 len:8 PRP1 0x0 PRP2 0x0 00:28:59.638 [2024-11-20 11:56:32.470905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.638 [2024-11-20 11:56:32.470946] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e5b050 was disconnected and freed. reset controller. 00:28:59.638 [2024-11-20 11:56:32.471155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.638 [2024-11-20 11:56:32.471217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor 00:28:59.638 [2024-11-20 11:56:32.471284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.638 [2024-11-20 11:56:32.471308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.638 [2024-11-20 11:56:32.471348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de5dc0 with addr=10.0.0.2, port=4420 00:28:59.638 [2024-11-20 11:56:32.471355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5dc0 is same with the state(5) to be set 00:28:59.638 [2024-11-20 11:56:32.471366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor 00:28:59.638 [2024-11-20 11:56:32.471376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.638 [2024-11-20 11:56:32.471381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.638 [2024-11-20 11:56:32.471388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.638 [2024-11-20 11:56:32.471402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.638 [2024-11-20 11:56:32.471407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.638 11:56:32 -- host/timeout.sh@90 -- # sleep 1
00:29:00.577 [2024-11-20 11:56:33.469567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.577 [2024-11-20 11:56:33.469617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.577 [2024-11-20 11:56:33.469626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de5dc0 with addr=10.0.0.2, port=4420
00:29:00.577 [2024-11-20 11:56:33.469634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5dc0 is same with the state(5) to be set
00:29:00.577 [2024-11-20 11:56:33.469649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor
00:29:00.577 [2024-11-20 11:56:33.469669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.577 [2024-11-20 11:56:33.469674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.577 [2024-11-20 11:56:33.469681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.577 [2024-11-20 11:56:33.469698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.577 [2024-11-20 11:56:33.469704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.577 11:56:33 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:00.837 [2024-11-20 11:56:33.680368] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:00.837 11:56:33 -- host/timeout.sh@92 -- # wait 90150
00:29:01.776 [2024-11-20 11:56:34.486156] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:08.354 
00:29:08.354                                                                                                 Latency(us)
00:29:08.354 [2024-11-20T11:56:41.397Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:08.354 [2024-11-20T11:56:41.397Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:08.354 	 Verification LBA range: start 0x0 length 0x4000
00:29:08.354 	 NVMe0n1             :      10.00   12816.52      50.06       0.00     0.00    9974.74     880.01 3018433.62
00:29:08.354 [2024-11-20T11:56:41.397Z] ===================================================================================================================
00:29:08.354 [2024-11-20T11:56:41.397Z] Total                                  :   12816.52      50.06       0.00     0.00    9974.74     880.01 3018433.62
00:29:08.354 0
00:29:08.354 11:56:41 -- host/timeout.sh@97 -- # rpc_pid=90272
00:29:08.354 11:56:41 -- host/timeout.sh@98 -- # sleep 1
00:29:08.354 11:56:41 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:08.614 Running I/O for 10 seconds...
00:29:09.557 11:56:42 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.557 [2024-11-20 11:56:42.555520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.557 [2024-11-20 11:56:42.555564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.557 [2024-11-20 11:56:42.555571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.557 [2024-11-20 11:56:42.555576] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ac70 is same with the state(5) to be set 00:29:09.558 [2024-11-20 11:56:42.555969] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.558 [2024-11-20 11:56:42.556315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.558 [2024-11-20 11:56:42.556321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27760 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 
[2024-11-20 11:56:42.556618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.559 [2024-11-20 11:56:42.556751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.559 [2024-11-20 11:56:42.556807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.559 [2024-11-20 11:56:42.556813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.556873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.556897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.556910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.556935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.556959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.556990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.556997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 
11:56:42.557131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.560 [2024-11-20 11:56:42.557258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.560 [2024-11-20 11:56:42.557292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.560 [2024-11-20 11:56:42.557297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.561 [2024-11-20 11:56:42.557467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.561 [2024-11-20 11:56:42.557492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28152 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.561 [2024-11-20 11:56:42.557529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.561 [2024-11-20 11:56:42.557541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.561 [2024-11-20 11:56:42.557553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.561 [2024-11-20 11:56:42.557580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.561 [2024-11-20 11:56:42.557592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:09.561 [2024-11-20 11:56:42.557627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.561 [2024-11-20 11:56:42.557698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f90 is same with the state(5) to be set 00:29:09.561 [2024-11-20 11:56:42.557713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:09.561 [2024-11-20 11:56:42.557717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:09.561 [2024-11-20 11:56:42.557722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27664 len:8 PRP1 0x0 PRP2 0x0 00:29:09.561 [2024-11-20 11:56:42.557727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.561 [2024-11-20 11:56:42.557767] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e56f90 was disconnected and freed. reset controller. 
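The block above is the host side of the submission-queue teardown: for every I/O still outstanding on the qpair, bdev_nvme logs the original command (nvme_io_qpair_print_command, with its real cid and lba) followed by a manually manufactured completion (nvme_qpair_manual_complete_request / spdk_nvme_print_completion, cid:0 on each completion line here) carrying the generic NVMe status ABORTED - SQ DELETION (00/08), i.e. status code type 0x0, status code 0x08. Once the queue is drained, qpair 0x1e56f90 is disconnected and freed and a controller reset is scheduled. A rough way to triage a flood like this from a saved copy of the console output (the file name below is an assumption, not something the harness writes):

  grep -c 'ABORTED - SQ DELETION' console.log                                              # how many completions were aborted
  grep -oE 'print_command: \*NOTICE\*: (READ|WRITE) sqid:1' console.log | sort | uniq -c   # READ/WRITE mix of the aborted I/O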
00:29:09.561 [2024-11-20 11:56:42.557828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.562 [2024-11-20 11:56:42.557836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.562 [2024-11-20 11:56:42.557843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.562 [2024-11-20 11:56:42.557848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.562 [2024-11-20 11:56:42.557857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.562 [2024-11-20 11:56:42.557862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.562 [2024-11-20 11:56:42.557868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.562 [2024-11-20 11:56:42.557873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.562 [2024-11-20 11:56:42.557878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5dc0 is same with the state(5) to be set 00:29:09.562 [2024-11-20 11:56:42.558044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.562 [2024-11-20 11:56:42.558056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor 00:29:09.562 [2024-11-20 11:56:42.558124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.562 [2024-11-20 11:56:42.558148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.562 [2024-11-20 11:56:42.558156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de5dc0 with addr=10.0.0.2, port=4420 00:29:09.562 [2024-11-20 11:56:42.558163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5dc0 is same with the state(5) to be set 00:29:09.562 [2024-11-20 11:56:42.558173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor 00:29:09.562 [2024-11-20 11:56:42.558182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.562 [2024-11-20 11:56:42.558188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.562 [2024-11-20 11:56:42.558196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.562 [2024-11-20 11:56:42.582672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.562 [2024-11-20 11:56:42.582704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.562 11:56:42 -- host/timeout.sh@101 -- # sleep 3 00:29:10.542 [2024-11-20 11:56:43.580878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.542 [2024-11-20 11:56:43.580941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.542 [2024-11-20 11:56:43.580950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de5dc0 with addr=10.0.0.2, port=4420 00:29:10.542 [2024-11-20 11:56:43.580959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5dc0 is same with the state(5) to be set 00:29:10.542 [2024-11-20 11:56:43.580974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor 00:29:10.542 [2024-11-20 11:56:43.580986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.542 [2024-11-20 11:56:43.580991] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.542 [2024-11-20 11:56:43.580998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.542 [2024-11-20 11:56:43.581014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.542 [2024-11-20 11:56:43.581020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.922 [2024-11-20 11:56:44.579167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.922 [2024-11-20 11:56:44.579224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.922 [2024-11-20 11:56:44.579234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de5dc0 with addr=10.0.0.2, port=4420 00:29:11.922 [2024-11-20 11:56:44.579242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5dc0 is same with the state(5) to be set 00:29:11.922 [2024-11-20 11:56:44.579254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor 00:29:11.922 [2024-11-20 11:56:44.579265] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.922 [2024-11-20 11:56:44.579271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.922 [2024-11-20 11:56:44.579276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.922 [2024-11-20 11:56:44.579290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.922 [2024-11-20 11:56:44.579296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.863 [2024-11-20 11:56:45.577538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.863 [2024-11-20 11:56:45.577600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.863 [2024-11-20 11:56:45.577609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de5dc0 with addr=10.0.0.2, port=4420 00:29:12.863 [2024-11-20 11:56:45.577617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5dc0 is same with the state(5) to be set 00:29:12.863 [2024-11-20 11:56:45.577730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5dc0 (9): Bad file descriptor 00:29:12.863 [2024-11-20 11:56:45.577839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.863 [2024-11-20 11:56:45.577845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.863 [2024-11-20 11:56:45.577852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.863 [2024-11-20 11:56:45.579545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.863 [2024-11-20 11:56:45.579555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.863 11:56:45 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.863 [2024-11-20 11:56:45.765688] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.863 11:56:45 -- host/timeout.sh@103 -- # wait 90272 00:29:13.803 [2024-11-20 11:56:46.594752] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
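errno = 111 is ECONNREFUSED: the target host is still reachable, but the test has taken away the NVMe/TCP listener on 10.0.0.2:4420, so every reconnect attempt made by the reset path is refused and bdev_nvme keeps reporting "Resetting controller failed" and rescheduling the reset. The attempts above land roughly one second apart (11:56:42 through 11:56:45) until host/timeout.sh@102 re-adds the listener (the "NVMe/TCP Target Listening" notice), at which point the very next attempt connects and the reset completes ("Resetting controller successful"). A minimal sketch of the listener bounce that produces this behaviour, using the NQN, address and port from the trace (the matching remove_listener call sits outside this excerpt, so treat this as the pattern rather than the script itself):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # host reconnects now fail with errno 111
  sleep 3                                                                   # cf. host/timeout.sh@101 above
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # next reconnect attempt succeeds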
00:29:19.083
00:29:19.083 Latency(us)
00:29:19.083 [2024-11-20T11:56:52.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.083 [2024-11-20T11:56:52.126Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:19.083 Verification LBA range: start 0x0 length 0x4000
00:29:19.083 NVMe0n1 : 10.00 9686.10 37.84 8694.07 0.00 6955.41 568.79 3018433.62
00:29:19.083 [2024-11-20T11:56:52.126Z] ===================================================================================================================
00:29:19.083 [2024-11-20T11:56:52.126Z] Total : 9686.10 37.84 8694.07 0.00 6955.41 0.00 3018433.62
00:29:19.083 0
00:29:19.083 11:56:51 -- host/timeout.sh@105 -- # killprocess 90108
00:29:19.083 11:56:51 -- common/autotest_common.sh@936 -- # '[' -z 90108 ']'
00:29:19.083 11:56:51 -- common/autotest_common.sh@940 -- # kill -0 90108
00:29:19.083 11:56:51 -- common/autotest_common.sh@941 -- # uname
00:29:19.083 11:56:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:19.083 11:56:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90108
00:29:19.083 11:56:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:29:19.083 11:56:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:29:19.083 killing process with pid 90108
11:56:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90108'
11:56:51 -- common/autotest_common.sh@955 -- # kill 90108
00:29:19.083 Received shutdown signal, test time was about 10.000000 seconds
00:29:19.083
00:29:19.083 Latency(us)
00:29:19.083 [2024-11-20T11:56:52.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.083 [2024-11-20T11:56:52.126Z] ===================================================================================================================
00:29:19.083 [2024-11-20T11:56:52.126Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:19.083 11:56:51 -- common/autotest_common.sh@960 -- # wait 90108
00:29:19.083 11:56:51 -- host/timeout.sh@110 -- # bdevperf_pid=90397
00:29:19.083 11:56:51 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:29:19.083 11:56:51 -- host/timeout.sh@112 -- # waitforlisten 90397 /var/tmp/bdevperf.sock
00:29:19.083 11:56:51 -- common/autotest_common.sh@829 -- # '[' -z 90397 ']'
00:29:19.083 11:56:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:19.083 11:56:51 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:19.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:56:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
11:56:51 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:19.083 11:56:51 -- common/autotest_common.sh@10 -- # set +x
00:29:19.083 [2024-11-20 11:56:51.783640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
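In the summary table above the columns are runtime in seconds, IOPS, throughput in MiB/s, failed I/O per second, timed-out I/O per second, and average/min/max latency in microseconds; the non-zero Fail/s presumably reflects the I/O that was aborted while the listener was down, and the second, all-zero table is bdevperf's shutdown printout after killprocess signals pid 90108. The MiB/s column is consistent with the IOPS column at the 4096-byte I/O size reported in the job line; a quick sanity check with plain shell arithmetic (nothing here comes from the harness):

  awk 'BEGIN { print 9686.10 * 4096 / (1024 * 1024) }'   # ~37.84 MiB/s, matching the table

The lines that follow start a fresh bdevperf instance with -z (wait for RPC) on /var/tmp/bdevperf.sock; the harness waits for that socket and then configures the new instance over it, as the rest of the trace shows.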
00:29:19.083 [2024-11-20 11:56:51.783729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90397 ] 00:29:19.083 [2024-11-20 11:56:51.905226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.083 [2024-11-20 11:56:51.985878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.654 11:56:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:19.654 11:56:52 -- common/autotest_common.sh@862 -- # return 0 00:29:19.654 11:56:52 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90397 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:19.654 11:56:52 -- host/timeout.sh@116 -- # dtrace_pid=90421 00:29:19.654 11:56:52 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:19.914 11:56:52 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:20.175 NVMe0n1 00:29:20.175 11:56:53 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:20.175 11:56:53 -- host/timeout.sh@124 -- # rpc_pid=90480 00:29:20.175 11:56:53 -- host/timeout.sh@125 -- # sleep 1 00:29:20.175 Running I/O for 10 seconds... 00:29:21.116 11:56:54 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.396 [2024-11-20 11:56:54.290012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290130] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290218] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.396 [2024-11-20 11:56:54.290252] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290257] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the 
state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290345] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290363] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290376] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 
11:56:54.290519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134e400 is same with the state(5) to be set 00:29:21.397 [2024-11-20 11:56:54.290765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.397 [2024-11-20 11:56:54.290936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.397 [2024-11-20 11:56:54.290949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.290956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.290962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.290969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.290974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.290981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.290987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.290999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:21.398 [2024-11-20 11:56:54.291248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291393] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291555] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.398 [2024-11-20 11:56:54.291561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.398 [2024-11-20 11:56:54.291568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.291986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.291991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 
11:56:54.292023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.399 [2024-11-20 11:56:54.292171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-20 11:56:54.292184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292323] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:21.400 [2024-11-20 11:56:54.292648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.400 [2024-11-20 11:56:54.292725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.400 [2024-11-20 11:56:54.292730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.401 [2024-11-20 11:56:54.292744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.401 [2024-11-20 11:56:54.292750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.401 [2024-11-20 11:56:54.292757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262050 is same with the state(5) to be set 00:29:21.401 [2024-11-20 11:56:54.292771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:21.401 [2024-11-20 11:56:54.292776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:21.401 [2024-11-20 11:56:54.292781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115760 len:8 PRP1 0x0 PRP2 0x0 00:29:21.401 [2024-11-20 11:56:54.292786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.401 [2024-11-20 11:56:54.292843] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1262050 was disconnected and freed. reset controller. 
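
Annotation: the dump above is the host-side nvme_qpair layer draining its queue. Once the target deletes the submission queue, every outstanding READ completes with ABORTED - SQ DELETION (status 00/08), the TCP qpair 0x1262050 is freed, and bdev_nvme schedules a controller reset. A minimal way to watch that reset/reconnect state from outside the test is to poll the host application's RPC; the sketch below is illustrative only. rpc.py and bdev_nvme_get_controllers are the stock SPDK tools already used elsewhere in this run, but the RPC socket path and the controller name NVMe0 are assumptions taken from the bdevperf output above.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock   # assumption: RPC socket of the host-side bdevperf app
    # Poll a few times while the reconnect loop below is in progress; the output
    # lists the controller and its current connection state.
    for _ in 1 2 3 4 5; do
        "$rpc" -s "$sock" bdev_nvme_get_controllers -n NVMe0 || true
        sleep 2
    done
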
00:29:21.401 [2024-11-20 11:56:54.293075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.401 [2024-11-20 11:56:54.293142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ecdc0 (9): Bad file descriptor 00:29:21.401 [2024-11-20 11:56:54.293214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.401 [2024-11-20 11:56:54.293237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.401 [2024-11-20 11:56:54.293245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ecdc0 with addr=10.0.0.2, port=4420 00:29:21.401 [2024-11-20 11:56:54.293252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ecdc0 is same with the state(5) to be set 00:29:21.401 [2024-11-20 11:56:54.293263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ecdc0 (9): Bad file descriptor 00:29:21.401 [2024-11-20 11:56:54.293272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.401 [2024-11-20 11:56:54.293277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.401 [2024-11-20 11:56:54.293283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.401 [2024-11-20 11:56:54.293296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.401 [2024-11-20 11:56:54.293302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.401 11:56:54 -- host/timeout.sh@128 -- # wait 90480 00:29:23.320 [2024-11-20 11:56:56.289612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.320 [2024-11-20 11:56:56.289673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.320 [2024-11-20 11:56:56.289682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ecdc0 with addr=10.0.0.2, port=4420 00:29:23.320 [2024-11-20 11:56:56.289691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ecdc0 is same with the state(5) to be set 00:29:23.320 [2024-11-20 11:56:56.289724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ecdc0 (9): Bad file descriptor 00:29:23.320 [2024-11-20 11:56:56.289736] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:23.320 [2024-11-20 11:56:56.289742] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:23.320 [2024-11-20 11:56:56.289748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:23.320 [2024-11-20 11:56:56.289766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
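
Annotation: each retry above fails inside posix_sock_create with errno 111, i.e. ECONNREFUSED: the subsystem's listener on 10.0.0.2:4420 is gone while the target is reconfigured, so the host backs off roughly two seconds between attempts (11:56:54, :56, :58, :00), which is what the reconnect-delay trace further down records. The same listener can be probed by hand from plain bash; this is only a sketch, and it assumes bash's /dev/tcp redirection is available on the build host.

    # Probe the target address/port taken from the log above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 is accepting connections"
    else
        echo "connect failed (refused or timed out) - matches errno 111 ECONNREFUSED in the log"
    fi
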
00:29:23.320 [2024-11-20 11:56:56.289773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.859 [2024-11-20 11:56:58.286077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.859 [2024-11-20 11:56:58.286126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.859 [2024-11-20 11:56:58.286135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ecdc0 with addr=10.0.0.2, port=4420 00:29:25.859 [2024-11-20 11:56:58.286143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ecdc0 is same with the state(5) to be set 00:29:25.859 [2024-11-20 11:56:58.286161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ecdc0 (9): Bad file descriptor 00:29:25.859 [2024-11-20 11:56:58.286173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.859 [2024-11-20 11:56:58.286178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.859 [2024-11-20 11:56:58.286184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.859 [2024-11-20 11:56:58.286201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.859 [2024-11-20 11:56:58.286207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.767 [2024-11-20 11:57:00.282458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.767 [2024-11-20 11:57:00.282503] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.767 [2024-11-20 11:57:00.282509] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.767 [2024-11-20 11:57:00.282516] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:27.767 [2024-11-20 11:57:00.282533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.338 00:29:28.338 Latency(us) 00:29:28.338 [2024-11-20T11:57:01.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.338 [2024-11-20T11:57:01.381Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:28.338 NVMe0n1 : 8.11 3110.92 12.15 15.78 0.00 40989.96 1667.02 7033243.39 00:29:28.338 [2024-11-20T11:57:01.381Z] =================================================================================================================== 00:29:28.338 [2024-11-20T11:57:01.381Z] Total : 3110.92 12.15 15.78 0.00 40989.96 1667.02 7033243.39 00:29:28.338 0 00:29:28.338 11:57:01 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:28.338 Attaching 5 probes... 
00:29:28.338 1138.444279: reset bdev controller NVMe0 00:29:28.338 1138.546500: reconnect bdev controller NVMe0 00:29:28.338 3134.888323: reconnect delay bdev controller NVMe0 00:29:28.338 3134.904043: reconnect bdev controller NVMe0 00:29:28.338 5131.359041: reconnect delay bdev controller NVMe0 00:29:28.338 5131.374030: reconnect bdev controller NVMe0 00:29:28.338 7127.814911: reconnect delay bdev controller NVMe0 00:29:28.338 7127.831237: reconnect bdev controller NVMe0 00:29:28.338 11:57:01 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:28.338 11:57:01 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:28.338 11:57:01 -- host/timeout.sh@136 -- # kill 90421 00:29:28.338 11:57:01 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:28.338 11:57:01 -- host/timeout.sh@139 -- # killprocess 90397 00:29:28.338 11:57:01 -- common/autotest_common.sh@936 -- # '[' -z 90397 ']' 00:29:28.338 11:57:01 -- common/autotest_common.sh@940 -- # kill -0 90397 00:29:28.338 11:57:01 -- common/autotest_common.sh@941 -- # uname 00:29:28.338 11:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:28.338 11:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90397 00:29:28.338 killing process with pid 90397 00:29:28.338 Received shutdown signal, test time was about 8.199067 seconds 00:29:28.338 00:29:28.338 Latency(us) 00:29:28.338 [2024-11-20T11:57:01.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.338 [2024-11-20T11:57:01.381Z] =================================================================================================================== 00:29:28.338 [2024-11-20T11:57:01.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.338 11:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:28.338 11:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:28.338 11:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90397' 00:29:28.338 11:57:01 -- common/autotest_common.sh@955 -- # kill 90397 00:29:28.338 11:57:01 -- common/autotest_common.sh@960 -- # wait 90397 00:29:28.598 11:57:01 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:28.858 11:57:01 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:28.858 11:57:01 -- host/timeout.sh@145 -- # nvmftestfini 00:29:28.858 11:57:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:28.858 11:57:01 -- nvmf/common.sh@116 -- # sync 00:29:28.858 11:57:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:28.858 11:57:01 -- nvmf/common.sh@119 -- # set +e 00:29:28.858 11:57:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:28.858 11:57:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:28.858 rmmod nvme_tcp 00:29:28.858 rmmod nvme_fabrics 00:29:28.858 rmmod nvme_keyring 00:29:28.858 11:57:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:28.858 11:57:01 -- nvmf/common.sh@123 -- # set -e 00:29:28.858 11:57:01 -- nvmf/common.sh@124 -- # return 0 00:29:28.858 11:57:01 -- nvmf/common.sh@477 -- # '[' -n 89822 ']' 00:29:28.858 11:57:01 -- nvmf/common.sh@478 -- # killprocess 89822 00:29:28.858 11:57:01 -- common/autotest_common.sh@936 -- # '[' -z 89822 ']' 00:29:28.858 11:57:01 -- common/autotest_common.sh@940 -- # kill -0 89822 00:29:28.858 11:57:01 -- common/autotest_common.sh@941 -- # uname 00:29:28.858 11:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:29:28.858 11:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89822 00:29:29.118 killing process with pid 89822 00:29:29.118 11:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:29.118 11:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:29.118 11:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89822' 00:29:29.118 11:57:01 -- common/autotest_common.sh@955 -- # kill 89822 00:29:29.118 11:57:01 -- common/autotest_common.sh@960 -- # wait 89822 00:29:29.378 11:57:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:29.378 11:57:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:29.378 11:57:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:29.378 11:57:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.378 11:57:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:29.378 11:57:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.378 11:57:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.378 11:57:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.379 11:57:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:29:29.379 00:29:29.379 real 0m45.591s 00:29:29.379 user 2m13.121s 00:29:29.379 sys 0m4.752s 00:29:29.379 11:57:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:29.379 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:29.379 ************************************ 00:29:29.379 END TEST nvmf_timeout 00:29:29.379 ************************************ 00:29:29.379 11:57:02 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:29:29.379 11:57:02 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:29.379 11:57:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.379 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:29.379 11:57:02 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:29.379 00:29:29.379 real 18m18.779s 00:29:29.379 user 58m46.542s 00:29:29.379 sys 3m39.885s 00:29:29.379 11:57:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:29.379 ************************************ 00:29:29.379 END TEST nvmf_tcp 00:29:29.379 ************************************ 00:29:29.379 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:29.379 11:57:02 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:29:29.379 11:57:02 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:29.379 11:57:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:29.379 11:57:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:29.379 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:29.379 ************************************ 00:29:29.379 START TEST spdkcli_nvmf_tcp 00:29:29.379 ************************************ 00:29:29.379 11:57:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:29.639 * Looking for test storage... 
00:29:29.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:29.639 11:57:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:29.639 11:57:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:29.639 11:57:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:29.639 11:57:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:29.639 11:57:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:29.639 11:57:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:29.639 11:57:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:29.639 11:57:02 -- scripts/common.sh@335 -- # IFS=.-: 00:29:29.639 11:57:02 -- scripts/common.sh@335 -- # read -ra ver1 00:29:29.639 11:57:02 -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.639 11:57:02 -- scripts/common.sh@336 -- # read -ra ver2 00:29:29.639 11:57:02 -- scripts/common.sh@337 -- # local 'op=<' 00:29:29.640 11:57:02 -- scripts/common.sh@339 -- # ver1_l=2 00:29:29.640 11:57:02 -- scripts/common.sh@340 -- # ver2_l=1 00:29:29.640 11:57:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:29.640 11:57:02 -- scripts/common.sh@343 -- # case "$op" in 00:29:29.640 11:57:02 -- scripts/common.sh@344 -- # : 1 00:29:29.640 11:57:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:29.640 11:57:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:29.640 11:57:02 -- scripts/common.sh@364 -- # decimal 1 00:29:29.640 11:57:02 -- scripts/common.sh@352 -- # local d=1 00:29:29.640 11:57:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.640 11:57:02 -- scripts/common.sh@354 -- # echo 1 00:29:29.640 11:57:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:29.640 11:57:02 -- scripts/common.sh@365 -- # decimal 2 00:29:29.640 11:57:02 -- scripts/common.sh@352 -- # local d=2 00:29:29.640 11:57:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.640 11:57:02 -- scripts/common.sh@354 -- # echo 2 00:29:29.640 11:57:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:29.640 11:57:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:29.640 11:57:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:29.640 11:57:02 -- scripts/common.sh@367 -- # return 0 00:29:29.640 11:57:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.640 11:57:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 11:57:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 11:57:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 11:57:02 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:29.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.640 --rc genhtml_branch_coverage=1 00:29:29.640 --rc genhtml_function_coverage=1 00:29:29.640 --rc genhtml_legend=1 00:29:29.640 --rc geninfo_all_blocks=1 00:29:29.640 --rc geninfo_unexecuted_blocks=1 00:29:29.640 00:29:29.640 ' 00:29:29.640 11:57:02 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:29.640 11:57:02 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:29.640 11:57:02 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:29.640 11:57:02 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:29.640 11:57:02 -- nvmf/common.sh@7 -- # uname -s 00:29:29.640 11:57:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.640 11:57:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.640 11:57:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.640 11:57:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.640 11:57:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.640 11:57:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.640 11:57:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.640 11:57:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.640 11:57:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.640 11:57:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.640 11:57:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:29:29.640 11:57:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:29:29.640 11:57:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.640 11:57:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.640 11:57:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:29.640 11:57:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:29.640 11:57:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.640 11:57:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.640 11:57:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.640 11:57:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.640 11:57:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.640 11:57:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.640 11:57:02 -- paths/export.sh@5 -- # export PATH 00:29:29.640 11:57:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.640 11:57:02 -- nvmf/common.sh@46 -- # : 0 00:29:29.640 11:57:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:29.640 11:57:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:29.640 11:57:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:29.640 11:57:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.640 11:57:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.640 11:57:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:29.640 11:57:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:29.640 11:57:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:29.640 11:57:02 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:29.640 11:57:02 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:29.640 11:57:02 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:29.640 11:57:02 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:29.640 11:57:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:29.640 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:29.640 11:57:02 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:29.640 11:57:02 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90708 00:29:29.640 11:57:02 -- spdkcli/common.sh@34 -- # waitforlisten 90708 00:29:29.640 11:57:02 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:29.640 11:57:02 -- common/autotest_common.sh@829 -- # '[' -z 90708 ']' 00:29:29.640 11:57:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.640 11:57:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.640 11:57:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.640 11:57:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.640 11:57:02 -- common/autotest_common.sh@10 -- # set +x 00:29:29.901 [2024-11-20 11:57:02.725271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
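The xtrace above shows how spdkcli/common.sh brings up the target for this test: it launches build/bin/nvmf_tgt with core mask 0x3, records the pid (90708), and waitforlisten then polls until the process answers on the UNIX-domain RPC socket /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern, assuming the default socket path and the repo layout used by this job (a simplified stand-in for waitforlisten, not the helper itself):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    # poll the UNIX-domain RPC socket until the target is ready to accept commands
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    "$SPDK/scripts/spdkcli.py" ll /nvmf    # spdkcli can now browse the target's config tree
    kill "$nvmf_tgt_pid"

The real helper also checks on every iteration that the pid is still alive and gives up after a timeout; the loop above only captures the basic idea before spdkcli_job.py starts feeding in the create commands seen below.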
00:29:29.901 [2024-11-20 11:57:02.725343] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90708 ] 00:29:29.901 [2024-11-20 11:57:02.861993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:29.901 [2024-11-20 11:57:02.940121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:30.160 [2024-11-20 11:57:02.940546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.160 [2024-11-20 11:57:02.940550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.730 11:57:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:30.730 11:57:03 -- common/autotest_common.sh@862 -- # return 0 00:29:30.730 11:57:03 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:30.730 11:57:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:30.730 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:29:30.730 11:57:03 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:30.730 11:57:03 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:30.730 11:57:03 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:30.730 11:57:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:30.730 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:29:30.730 11:57:03 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:30.730 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:30.730 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:30.730 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:30.730 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:30.730 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:30.730 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:30.730 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:30.730 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:30.730 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:30.730 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:30.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:30.730 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:30.730 ' 00:29:30.990 [2024-11-20 11:57:03.991630] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:33.551 [2024-11-20 11:57:06.322107] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.932 [2024-11-20 11:57:07.660271] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:37.479 [2024-11-20 11:57:10.140715] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:39.388 [2024-11-20 11:57:12.284503] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:41.295 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:41.295 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:41.295 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:41.295 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:41.295 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:41.295 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:41.295 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:41.295 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:41.295 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:41.295 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:41.295 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:41.295 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:41.296 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:41.296 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:41.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:41.296 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:41.296 11:57:14 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:41.296 11:57:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:41.296 11:57:14 -- common/autotest_common.sh@10 -- # set +x 00:29:41.296 11:57:14 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:41.296 11:57:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:41.296 11:57:14 -- common/autotest_common.sh@10 -- # set +x 00:29:41.296 11:57:14 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:41.296 11:57:14 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:41.555 11:57:14 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:41.555 11:57:14 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:41.555 11:57:14 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:41.555 11:57:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:41.555 11:57:14 -- common/autotest_common.sh@10 -- # set +x 00:29:41.555 11:57:14 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:41.555 11:57:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:41.555 11:57:14 -- 
common/autotest_common.sh@10 -- # set +x 00:29:41.555 11:57:14 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:41.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:41.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:41.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:41.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:41.555 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:41.555 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:41.555 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:41.555 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:41.555 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:41.555 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:41.555 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:41.555 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:41.555 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:41.555 ' 00:29:48.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:48.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:48.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:48.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:48.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:48.152 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:48.152 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:48.152 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:48.152 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:48.152 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:48.152 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:48.152 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:48.152 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:48.152 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:48.152 11:57:20 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:48.152 11:57:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:48.152 11:57:20 -- common/autotest_common.sh@10 -- # set +x 00:29:48.152 11:57:20 -- spdkcli/nvmf.sh@90 -- # killprocess 90708 00:29:48.152 11:57:20 -- common/autotest_common.sh@936 -- # '[' -z 90708 ']' 00:29:48.152 11:57:20 -- common/autotest_common.sh@940 -- # kill -0 90708 00:29:48.152 11:57:20 -- common/autotest_common.sh@941 -- # uname 00:29:48.152 11:57:20 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:48.152 11:57:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90708 00:29:48.152 11:57:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:48.152 11:57:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:48.152 11:57:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90708' 00:29:48.152 killing process with pid 90708 00:29:48.152 11:57:20 -- common/autotest_common.sh@955 -- # kill 90708 00:29:48.152 [2024-11-20 11:57:20.286615] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:48.152 11:57:20 -- common/autotest_common.sh@960 -- # wait 90708 00:29:48.152 11:57:20 -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:48.152 11:57:20 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:48.152 11:57:20 -- spdkcli/common.sh@13 -- # '[' -n 90708 ']' 00:29:48.152 11:57:20 -- spdkcli/common.sh@14 -- # killprocess 90708 00:29:48.153 11:57:20 -- common/autotest_common.sh@936 -- # '[' -z 90708 ']' 00:29:48.153 11:57:20 -- common/autotest_common.sh@940 -- # kill -0 90708 00:29:48.153 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (90708) - No such process 00:29:48.153 Process with pid 90708 is not found 00:29:48.153 11:57:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 90708 is not found' 00:29:48.153 11:57:20 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:48.153 11:57:20 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:48.153 11:57:20 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:48.153 00:29:48.153 real 0m18.104s 00:29:48.153 user 0m39.712s 00:29:48.153 sys 0m0.958s 00:29:48.153 11:57:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:48.153 11:57:20 -- common/autotest_common.sh@10 -- # set +x 00:29:48.153 ************************************ 00:29:48.153 END TEST spdkcli_nvmf_tcp 00:29:48.153 ************************************ 00:29:48.153 11:57:20 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:48.153 11:57:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:48.153 11:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:48.153 11:57:20 -- common/autotest_common.sh@10 -- # set +x 00:29:48.153 ************************************ 00:29:48.153 START TEST nvmf_identify_passthru 00:29:48.153 ************************************ 00:29:48.153 11:57:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:48.153 * Looking for test storage... 
00:29:48.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:48.153 11:57:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:48.153 11:57:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:48.153 11:57:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:48.153 11:57:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:48.153 11:57:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:48.153 11:57:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:48.153 11:57:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:48.153 11:57:20 -- scripts/common.sh@335 -- # IFS=.-: 00:29:48.153 11:57:20 -- scripts/common.sh@335 -- # read -ra ver1 00:29:48.153 11:57:20 -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.153 11:57:20 -- scripts/common.sh@336 -- # read -ra ver2 00:29:48.153 11:57:20 -- scripts/common.sh@337 -- # local 'op=<' 00:29:48.153 11:57:20 -- scripts/common.sh@339 -- # ver1_l=2 00:29:48.153 11:57:20 -- scripts/common.sh@340 -- # ver2_l=1 00:29:48.153 11:57:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:48.153 11:57:20 -- scripts/common.sh@343 -- # case "$op" in 00:29:48.153 11:57:20 -- scripts/common.sh@344 -- # : 1 00:29:48.153 11:57:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:48.153 11:57:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:48.153 11:57:20 -- scripts/common.sh@364 -- # decimal 1 00:29:48.153 11:57:20 -- scripts/common.sh@352 -- # local d=1 00:29:48.153 11:57:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.153 11:57:20 -- scripts/common.sh@354 -- # echo 1 00:29:48.153 11:57:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:48.153 11:57:20 -- scripts/common.sh@365 -- # decimal 2 00:29:48.153 11:57:20 -- scripts/common.sh@352 -- # local d=2 00:29:48.153 11:57:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.153 11:57:20 -- scripts/common.sh@354 -- # echo 2 00:29:48.153 11:57:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:48.153 11:57:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:48.153 11:57:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:48.153 11:57:20 -- scripts/common.sh@367 -- # return 0 00:29:48.153 11:57:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.153 11:57:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:48.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.153 --rc genhtml_branch_coverage=1 00:29:48.153 --rc genhtml_function_coverage=1 00:29:48.153 --rc genhtml_legend=1 00:29:48.153 --rc geninfo_all_blocks=1 00:29:48.153 --rc geninfo_unexecuted_blocks=1 00:29:48.153 00:29:48.153 ' 00:29:48.153 11:57:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:48.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.153 --rc genhtml_branch_coverage=1 00:29:48.153 --rc genhtml_function_coverage=1 00:29:48.153 --rc genhtml_legend=1 00:29:48.153 --rc geninfo_all_blocks=1 00:29:48.153 --rc geninfo_unexecuted_blocks=1 00:29:48.153 00:29:48.153 ' 00:29:48.153 11:57:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:48.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.153 --rc genhtml_branch_coverage=1 00:29:48.153 --rc genhtml_function_coverage=1 00:29:48.153 --rc genhtml_legend=1 00:29:48.153 --rc geninfo_all_blocks=1 00:29:48.153 --rc geninfo_unexecuted_blocks=1 00:29:48.153 00:29:48.153 ' 00:29:48.153 
11:57:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:48.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.153 --rc genhtml_branch_coverage=1 00:29:48.153 --rc genhtml_function_coverage=1 00:29:48.153 --rc genhtml_legend=1 00:29:48.153 --rc geninfo_all_blocks=1 00:29:48.153 --rc geninfo_unexecuted_blocks=1 00:29:48.153 00:29:48.153 ' 00:29:48.153 11:57:20 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:48.153 11:57:20 -- nvmf/common.sh@7 -- # uname -s 00:29:48.153 11:57:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.153 11:57:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.153 11:57:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.153 11:57:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.153 11:57:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.153 11:57:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.153 11:57:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.153 11:57:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.153 11:57:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.153 11:57:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.153 11:57:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:29:48.153 11:57:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:29:48.153 11:57:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.153 11:57:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.153 11:57:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:48.153 11:57:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:48.153 11:57:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.153 11:57:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.153 11:57:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.153 11:57:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.153 11:57:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.153 11:57:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.153 11:57:20 -- paths/export.sh@5 -- # export PATH 00:29:48.153 11:57:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.153 11:57:20 -- nvmf/common.sh@46 -- # : 0 00:29:48.153 11:57:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:48.153 11:57:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:48.153 11:57:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:48.153 11:57:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.153 11:57:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.153 11:57:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:48.153 11:57:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:48.153 11:57:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:48.153 11:57:20 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:48.153 11:57:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.153 11:57:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.153 11:57:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.153 11:57:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.153 11:57:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.153 11:57:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.153 11:57:20 -- paths/export.sh@5 -- # export PATH 00:29:48.154 11:57:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.154 11:57:20 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:29:48.154 11:57:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:48.154 11:57:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.154 11:57:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:48.154 11:57:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:48.154 11:57:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:48.154 11:57:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.154 11:57:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:48.154 11:57:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.154 11:57:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:29:48.154 11:57:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:29:48.154 11:57:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:29:48.154 11:57:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:29:48.154 11:57:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:29:48.154 11:57:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:29:48.154 11:57:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.154 11:57:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.154 11:57:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:48.154 11:57:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:29:48.154 11:57:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:48.154 11:57:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:48.154 11:57:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:48.154 11:57:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.154 11:57:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:48.154 11:57:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:48.154 11:57:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:48.154 11:57:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:48.154 11:57:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:29:48.154 11:57:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:29:48.154 Cannot find device "nvmf_tgt_br" 00:29:48.154 11:57:20 -- nvmf/common.sh@154 -- # true 00:29:48.154 11:57:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:29:48.154 Cannot find device "nvmf_tgt_br2" 00:29:48.154 11:57:20 -- nvmf/common.sh@155 -- # true 00:29:48.154 11:57:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:29:48.154 11:57:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:29:48.154 Cannot find device "nvmf_tgt_br" 00:29:48.154 11:57:20 -- nvmf/common.sh@157 -- # true 00:29:48.154 11:57:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:29:48.154 Cannot find device "nvmf_tgt_br2" 00:29:48.154 11:57:20 -- nvmf/common.sh@158 -- # true 00:29:48.154 11:57:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:29:48.154 11:57:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:29:48.154 11:57:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:48.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:48.154 11:57:20 -- nvmf/common.sh@161 -- # true 00:29:48.154 11:57:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:48.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:29:48.154 11:57:20 -- nvmf/common.sh@162 -- # true 00:29:48.154 11:57:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:29:48.154 11:57:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:48.154 11:57:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:48.154 11:57:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:48.154 11:57:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:48.154 11:57:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:48.154 11:57:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:48.154 11:57:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:48.154 11:57:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:48.154 11:57:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:29:48.154 11:57:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:29:48.154 11:57:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:29:48.154 11:57:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:29:48.154 11:57:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:48.154 11:57:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:48.154 11:57:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:48.154 11:57:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:29:48.154 11:57:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:29:48.154 11:57:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:29:48.154 11:57:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:48.154 11:57:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:48.154 11:57:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:48.154 11:57:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:48.154 11:57:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:29:48.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:29:48.154 00:29:48.154 --- 10.0.0.2 ping statistics --- 00:29:48.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.154 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:48.154 11:57:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:29:48.154 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:48.154 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:29:48.154 00:29:48.154 --- 10.0.0.3 ping statistics --- 00:29:48.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.154 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:29:48.154 11:57:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:48.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:29:48.154 00:29:48.154 --- 10.0.0.1 ping statistics --- 00:29:48.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.154 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:29:48.154 11:57:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.154 11:57:21 -- nvmf/common.sh@421 -- # return 0 00:29:48.154 11:57:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:48.154 11:57:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.154 11:57:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:48.154 11:57:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:48.154 11:57:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.154 11:57:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:48.154 11:57:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:48.154 11:57:21 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:48.154 11:57:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:48.154 11:57:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.154 11:57:21 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:48.154 11:57:21 -- common/autotest_common.sh@1519 -- # bdfs=() 00:29:48.154 11:57:21 -- common/autotest_common.sh@1519 -- # local bdfs 00:29:48.154 11:57:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:48.154 11:57:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:48.154 11:57:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:48.154 11:57:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:48.154 11:57:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:48.154 11:57:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:48.154 11:57:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:48.414 11:57:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:29:48.414 11:57:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:29:48.414 11:57:21 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:29:48.414 11:57:21 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:29:48.414 11:57:21 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:29:48.414 11:57:21 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:29:48.414 11:57:21 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:48.414 11:57:21 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:48.414 11:57:21 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:29:48.414 11:57:21 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:29:48.414 11:57:21 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:48.414 11:57:21 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:48.673 11:57:21 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:48.673 11:57:21 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:48.673 11:57:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:48.673 11:57:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 11:57:21 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:29:48.673 11:57:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:48.673 11:57:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 11:57:21 -- target/identify_passthru.sh@31 -- # nvmfpid=91225 00:29:48.673 11:57:21 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:48.673 11:57:21 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.673 11:57:21 -- target/identify_passthru.sh@35 -- # waitforlisten 91225 00:29:48.673 11:57:21 -- common/autotest_common.sh@829 -- # '[' -z 91225 ']' 00:29:48.673 11:57:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.673 11:57:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:48.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.673 11:57:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.673 11:57:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:48.673 11:57:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 [2024-11-20 11:57:21.668176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:48.673 [2024-11-20 11:57:21.668238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.933 [2024-11-20 11:57:21.794788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.933 [2024-11-20 11:57:21.887831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:48.933 [2024-11-20 11:57:21.887950] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.933 [2024-11-20 11:57:21.887957] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.933 [2024-11-20 11:57:21.887962] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
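The target for this test (pid 91225) runs inside the nvmf_tgt_ns_spdk namespace and was started with --wait-for-rpc, so nothing is initialized until the script enables the passthru identify handler over RPC. The rpc_cmd calls recorded below boil down to the following sequence (condensed from this run; the script drives it through helper functions, and 0000:00:06.0 is simply the first controller reported by gen_nvme.sh above):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    IDENTIFY="/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify"

    # target was launched with --wait-for-rpc: configure passthru, then finish init
    "$RPC" nvmf_set_config --passthru-identify-ctrlr
    "$RPC" framework_start_init
    "$RPC" nvmf_create_transport -t tcp -o -u 8192

    # attach the local PCIe controller and export it over NVMe/TCP
    "$RPC" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # identify data read over the fabric should match the local controller exactly
    local_sn=$("$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 | awk '/Serial Number:/ {print $3}')
    remote_sn=$("$IDENTIFY" -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
    [ "$local_sn" = "$remote_sn" ] || echo "passthru identify mismatch: $local_sn vs $remote_sn"

The log repeats the same comparison for the model number; both values matching the local controller (12340 / QEMU rather than the subsystem's own SPDK serial and model) is what demonstrates the passthru identify path and lets the test delete the subsystem and finish cleanly.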
00:29:48.933 [2024-11-20 11:57:21.888215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.933 [2024-11-20 11:57:21.888330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.933 [2024-11-20 11:57:21.888517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.933 [2024-11-20 11:57:21.888561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.502 11:57:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:49.502 11:57:22 -- common/autotest_common.sh@862 -- # return 0 00:29:49.502 11:57:22 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:49.502 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.502 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.502 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.502 11:57:22 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:49.502 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.502 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 [2024-11-20 11:57:22.604084] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:49.761 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.761 11:57:22 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.761 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.761 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 [2024-11-20 11:57:22.617379] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.761 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.761 11:57:22 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:49.761 11:57:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.761 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 11:57:22 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:29:49.761 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.761 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 Nvme0n1 00:29:49.761 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.761 11:57:22 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:49.761 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.761 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.761 11:57:22 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:49.761 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.761 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.761 11:57:22 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.761 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.761 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 [2024-11-20 11:57:22.780379] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.761 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:49.761 11:57:22 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:49.761 11:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.761 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.761 [2024-11-20 11:57:22.792197] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:49.761 [ 00:29:49.761 { 00:29:49.761 "allow_any_host": true, 00:29:49.761 "hosts": [], 00:29:49.761 "listen_addresses": [], 00:29:49.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:49.761 "subtype": "Discovery" 00:29:49.761 }, 00:29:49.761 { 00:29:49.761 "allow_any_host": true, 00:29:49.761 "hosts": [], 00:29:49.761 "listen_addresses": [ 00:29:49.761 { 00:29:49.761 "adrfam": "IPv4", 00:29:49.761 "traddr": "10.0.0.2", 00:29:49.761 "transport": "TCP", 00:29:49.761 "trsvcid": "4420", 00:29:49.761 "trtype": "TCP" 00:29:49.761 } 00:29:49.761 ], 00:29:49.761 "max_cntlid": 65519, 00:29:49.761 "max_namespaces": 1, 00:29:49.761 "min_cntlid": 1, 00:29:49.761 "model_number": "SPDK bdev Controller", 00:29:49.761 "namespaces": [ 00:29:49.761 { 00:29:49.761 "bdev_name": "Nvme0n1", 00:29:49.761 "name": "Nvme0n1", 00:29:50.021 "nguid": "211C7DF163F9466C88E6188A07FE2006", 00:29:50.021 "nsid": 1, 00:29:50.021 "uuid": "211c7df1-63f9-466c-88e6-188a07fe2006" 00:29:50.021 } 00:29:50.021 ], 00:29:50.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.021 "serial_number": "SPDK00000000000001", 00:29:50.021 "subtype": "NVMe" 00:29:50.021 } 00:29:50.021 ] 00:29:50.021 11:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.021 11:57:22 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:50.021 11:57:22 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:50.021 11:57:22 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:50.021 11:57:23 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:50.021 11:57:23 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:50.021 11:57:23 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:50.021 11:57:23 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:50.280 11:57:23 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:50.280 11:57:23 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:50.280 11:57:23 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:50.280 11:57:23 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.280 11:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.280 11:57:23 -- common/autotest_common.sh@10 -- # set +x 00:29:50.280 11:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.280 11:57:23 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:50.280 11:57:23 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:50.280 11:57:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:50.280 11:57:23 -- nvmf/common.sh@116 -- # sync 00:29:50.540 11:57:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:50.540 11:57:23 -- nvmf/common.sh@119 -- # set +e 00:29:50.540 11:57:23 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:29:50.540 11:57:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:50.540 rmmod nvme_tcp 00:29:50.540 rmmod nvme_fabrics 00:29:50.540 rmmod nvme_keyring 00:29:50.540 11:57:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:50.540 11:57:23 -- nvmf/common.sh@123 -- # set -e 00:29:50.540 11:57:23 -- nvmf/common.sh@124 -- # return 0 00:29:50.540 11:57:23 -- nvmf/common.sh@477 -- # '[' -n 91225 ']' 00:29:50.540 11:57:23 -- nvmf/common.sh@478 -- # killprocess 91225 00:29:50.540 11:57:23 -- common/autotest_common.sh@936 -- # '[' -z 91225 ']' 00:29:50.540 11:57:23 -- common/autotest_common.sh@940 -- # kill -0 91225 00:29:50.540 11:57:23 -- common/autotest_common.sh@941 -- # uname 00:29:50.540 11:57:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:50.540 11:57:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91225 00:29:50.540 11:57:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:50.540 11:57:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:50.540 killing process with pid 91225 00:29:50.540 11:57:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91225' 00:29:50.540 11:57:23 -- common/autotest_common.sh@955 -- # kill 91225 00:29:50.540 [2024-11-20 11:57:23.543547] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:50.540 11:57:23 -- common/autotest_common.sh@960 -- # wait 91225 00:29:50.799 11:57:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:50.799 11:57:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:50.799 11:57:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:50.799 11:57:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:50.799 11:57:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:50.799 11:57:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.799 11:57:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:50.799 11:57:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.799 11:57:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:29:50.799 ************************************ 00:29:50.799 END TEST nvmf_identify_passthru 00:29:50.799 ************************************ 00:29:50.799 00:29:50.799 real 0m3.247s 00:29:50.799 user 0m7.817s 00:29:50.799 sys 0m0.957s 00:29:50.799 11:57:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:50.799 11:57:23 -- common/autotest_common.sh@10 -- # set +x 00:29:51.059 11:57:23 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:51.059 11:57:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:51.060 11:57:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:51.060 11:57:23 -- common/autotest_common.sh@10 -- # set +x 00:29:51.060 ************************************ 00:29:51.060 START TEST nvmf_dif 00:29:51.060 ************************************ 00:29:51.060 11:57:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:51.060 * Looking for test storage... 
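[editor's note] The nvmf_identify_passthru test that just finished above reduces to a short RPC sequence. A condensed recap, taken directly from the rpc_cmd trace (rpc_cmd in the trace forwards to the same RPCs; scripts/rpc.py is shown here only for readability):
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the actual check: identify the namespace over the fabric and compare serial/model
    # numbers with the local PCIe device (12340 / QEMU in this run)
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'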
00:29:51.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:51.060 11:57:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:51.060 11:57:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:51.060 11:57:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:51.060 11:57:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:51.060 11:57:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:51.060 11:57:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:51.060 11:57:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:51.060 11:57:24 -- scripts/common.sh@335 -- # IFS=.-: 00:29:51.060 11:57:24 -- scripts/common.sh@335 -- # read -ra ver1 00:29:51.060 11:57:24 -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.060 11:57:24 -- scripts/common.sh@336 -- # read -ra ver2 00:29:51.060 11:57:24 -- scripts/common.sh@337 -- # local 'op=<' 00:29:51.060 11:57:24 -- scripts/common.sh@339 -- # ver1_l=2 00:29:51.060 11:57:24 -- scripts/common.sh@340 -- # ver2_l=1 00:29:51.060 11:57:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:51.060 11:57:24 -- scripts/common.sh@343 -- # case "$op" in 00:29:51.060 11:57:24 -- scripts/common.sh@344 -- # : 1 00:29:51.060 11:57:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:51.060 11:57:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.060 11:57:24 -- scripts/common.sh@364 -- # decimal 1 00:29:51.060 11:57:24 -- scripts/common.sh@352 -- # local d=1 00:29:51.060 11:57:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.060 11:57:24 -- scripts/common.sh@354 -- # echo 1 00:29:51.060 11:57:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:51.060 11:57:24 -- scripts/common.sh@365 -- # decimal 2 00:29:51.060 11:57:24 -- scripts/common.sh@352 -- # local d=2 00:29:51.060 11:57:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.060 11:57:24 -- scripts/common.sh@354 -- # echo 2 00:29:51.060 11:57:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:51.060 11:57:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:51.060 11:57:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:51.060 11:57:24 -- scripts/common.sh@367 -- # return 0 00:29:51.060 11:57:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.060 11:57:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:51.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.060 --rc genhtml_branch_coverage=1 00:29:51.060 --rc genhtml_function_coverage=1 00:29:51.060 --rc genhtml_legend=1 00:29:51.060 --rc geninfo_all_blocks=1 00:29:51.060 --rc geninfo_unexecuted_blocks=1 00:29:51.060 00:29:51.060 ' 00:29:51.060 11:57:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:51.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.060 --rc genhtml_branch_coverage=1 00:29:51.060 --rc genhtml_function_coverage=1 00:29:51.060 --rc genhtml_legend=1 00:29:51.060 --rc geninfo_all_blocks=1 00:29:51.060 --rc geninfo_unexecuted_blocks=1 00:29:51.060 00:29:51.060 ' 00:29:51.060 11:57:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:51.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.060 --rc genhtml_branch_coverage=1 00:29:51.060 --rc genhtml_function_coverage=1 00:29:51.060 --rc genhtml_legend=1 00:29:51.060 --rc geninfo_all_blocks=1 00:29:51.060 --rc geninfo_unexecuted_blocks=1 00:29:51.060 00:29:51.060 ' 00:29:51.060 
11:57:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:51.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.060 --rc genhtml_branch_coverage=1 00:29:51.060 --rc genhtml_function_coverage=1 00:29:51.060 --rc genhtml_legend=1 00:29:51.060 --rc geninfo_all_blocks=1 00:29:51.060 --rc geninfo_unexecuted_blocks=1 00:29:51.060 00:29:51.060 ' 00:29:51.060 11:57:24 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:51.060 11:57:24 -- nvmf/common.sh@7 -- # uname -s 00:29:51.060 11:57:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.060 11:57:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.060 11:57:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.060 11:57:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.060 11:57:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.060 11:57:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.060 11:57:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.060 11:57:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.060 11:57:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.060 11:57:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.060 11:57:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:29:51.060 11:57:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:29:51.060 11:57:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.060 11:57:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.060 11:57:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:51.060 11:57:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:51.320 11:57:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.320 11:57:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.320 11:57:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.320 11:57:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.320 11:57:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.320 11:57:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.320 11:57:24 -- paths/export.sh@5 -- # export PATH 00:29:51.320 11:57:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.320 11:57:24 -- nvmf/common.sh@46 -- # : 0 00:29:51.320 11:57:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:51.320 11:57:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:51.320 11:57:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:51.320 11:57:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.320 11:57:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.320 11:57:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:51.320 11:57:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:51.320 11:57:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:51.320 11:57:24 -- target/dif.sh@15 -- # NULL_META=16 00:29:51.320 11:57:24 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:51.320 11:57:24 -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:51.320 11:57:24 -- target/dif.sh@15 -- # NULL_DIF=1 00:29:51.320 11:57:24 -- target/dif.sh@135 -- # nvmftestinit 00:29:51.320 11:57:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:51.320 11:57:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.320 11:57:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:51.320 11:57:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:51.320 11:57:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:51.320 11:57:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.320 11:57:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:51.320 11:57:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.320 11:57:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:29:51.320 11:57:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:29:51.320 11:57:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:29:51.320 11:57:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:29:51.320 11:57:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:29:51.320 11:57:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:29:51.320 11:57:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.320 11:57:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.320 11:57:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:51.320 11:57:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:29:51.320 11:57:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:51.320 11:57:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:51.320 11:57:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:51.320 11:57:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.321 11:57:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:51.321 11:57:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:51.321 11:57:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:51.321 11:57:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:51.321 11:57:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:29:51.321 11:57:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:29:51.321 Cannot find device "nvmf_tgt_br" 
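[editor's note] The interface and address variables defined above describe the two-leg veth topology that nvmf_veth_init builds in the ip commands traced below. Condensed sketch of that topology (second target interface and the link-up commands omitted for brevity; everything shown appears in the trace):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg, 10.0.0.1/24 on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg, 10.0.0.2/24 inside the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # both *_br peers get enslaved to this bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT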
00:29:51.321 11:57:24 -- nvmf/common.sh@154 -- # true 00:29:51.321 11:57:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:29:51.321 Cannot find device "nvmf_tgt_br2" 00:29:51.321 11:57:24 -- nvmf/common.sh@155 -- # true 00:29:51.321 11:57:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:29:51.321 11:57:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:29:51.321 Cannot find device "nvmf_tgt_br" 00:29:51.321 11:57:24 -- nvmf/common.sh@157 -- # true 00:29:51.321 11:57:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:29:51.321 Cannot find device "nvmf_tgt_br2" 00:29:51.321 11:57:24 -- nvmf/common.sh@158 -- # true 00:29:51.321 11:57:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:29:51.321 11:57:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:29:51.321 11:57:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:51.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:51.321 11:57:24 -- nvmf/common.sh@161 -- # true 00:29:51.321 11:57:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:51.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:51.321 11:57:24 -- nvmf/common.sh@162 -- # true 00:29:51.321 11:57:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:29:51.321 11:57:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:51.321 11:57:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:51.321 11:57:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:51.321 11:57:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:51.321 11:57:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:51.321 11:57:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:51.321 11:57:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:51.321 11:57:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:51.321 11:57:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:29:51.321 11:57:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:29:51.321 11:57:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:29:51.321 11:57:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:29:51.321 11:57:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:51.582 11:57:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:51.582 11:57:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:51.582 11:57:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:29:51.582 11:57:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:29:51.582 11:57:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:29:51.582 11:57:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:51.582 11:57:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:51.582 11:57:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:51.582 11:57:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:51.582 11:57:24 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:29:51.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:29:51.582 00:29:51.582 --- 10.0.0.2 ping statistics --- 00:29:51.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.582 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:29:51.582 11:57:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:29:51.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:51.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.025 ms 00:29:51.582 00:29:51.582 --- 10.0.0.3 ping statistics --- 00:29:51.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.582 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:29:51.582 11:57:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:51.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:29:51.582 00:29:51.582 --- 10.0.0.1 ping statistics --- 00:29:51.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.582 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:29:51.582 11:57:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.582 11:57:24 -- nvmf/common.sh@421 -- # return 0 00:29:51.582 11:57:24 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:29:51.582 11:57:24 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:51.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:51.842 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:51.842 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:52.102 11:57:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.102 11:57:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:52.102 11:57:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:52.102 11:57:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.102 11:57:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:52.102 11:57:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:52.102 11:57:24 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:52.102 11:57:24 -- target/dif.sh@137 -- # nvmfappstart 00:29:52.102 11:57:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:52.102 11:57:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.102 11:57:24 -- common/autotest_common.sh@10 -- # set +x 00:29:52.102 11:57:24 -- nvmf/common.sh@469 -- # nvmfpid=91592 00:29:52.102 11:57:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:52.102 11:57:24 -- nvmf/common.sh@470 -- # waitforlisten 91592 00:29:52.102 11:57:24 -- common/autotest_common.sh@829 -- # '[' -z 91592 ']' 00:29:52.102 11:57:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.102 11:57:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.102 11:57:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
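[editor's note] Once the pings confirm connectivity, nvmfappstart launches the target inside the namespace and waits for its RPC socket before issuing any rpc_cmd. Condensed from the trace above; the polling loop is only an illustrative stand-in for the suite's waitforlisten helper, not its exact implementation:
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # illustrative wait: retry a harmless RPC until the app answers on /var/tmp/spdk.sock
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done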
00:29:52.102 11:57:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.102 11:57:24 -- common/autotest_common.sh@10 -- # set +x 00:29:52.102 [2024-11-20 11:57:25.003145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:52.102 [2024-11-20 11:57:25.003219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.102 [2024-11-20 11:57:25.127236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.362 [2024-11-20 11:57:25.208108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:52.362 [2024-11-20 11:57:25.208250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.362 [2024-11-20 11:57:25.208257] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.362 [2024-11-20 11:57:25.208262] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.362 [2024-11-20 11:57:25.208284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.932 11:57:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.932 11:57:25 -- common/autotest_common.sh@862 -- # return 0 00:29:52.932 11:57:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:52.932 11:57:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.932 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.932 11:57:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.932 11:57:25 -- target/dif.sh@139 -- # create_transport 00:29:52.932 11:57:25 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:52.932 11:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.932 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.932 [2024-11-20 11:57:25.888533] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.932 11:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.932 11:57:25 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:52.932 11:57:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:52.932 11:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:52.932 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.932 ************************************ 00:29:52.933 START TEST fio_dif_1_default 00:29:52.933 ************************************ 00:29:52.933 11:57:25 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:29:52.933 11:57:25 -- target/dif.sh@86 -- # create_subsystems 0 00:29:52.933 11:57:25 -- target/dif.sh@28 -- # local sub 00:29:52.933 11:57:25 -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.933 11:57:25 -- target/dif.sh@31 -- # create_subsystem 0 00:29:52.933 11:57:25 -- target/dif.sh@18 -- # local sub_id=0 00:29:52.933 11:57:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:52.933 11:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.933 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.933 bdev_null0 00:29:52.933 11:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.933 11:57:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:52.933 11:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.933 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.933 11:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.933 11:57:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:52.933 11:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.933 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.933 11:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.933 11:57:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:52.933 11:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.933 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.933 [2024-11-20 11:57:25.936525] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.933 11:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.933 11:57:25 -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:52.933 11:57:25 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:52.933 11:57:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:52.933 11:57:25 -- nvmf/common.sh@520 -- # config=() 00:29:52.933 11:57:25 -- nvmf/common.sh@520 -- # local subsystem config 00:29:52.933 11:57:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:52.933 11:57:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.933 11:57:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:52.933 { 00:29:52.933 "params": { 00:29:52.933 "name": "Nvme$subsystem", 00:29:52.933 "trtype": "$TEST_TRANSPORT", 00:29:52.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.933 "adrfam": "ipv4", 00:29:52.933 "trsvcid": "$NVMF_PORT", 00:29:52.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.933 "hdgst": ${hdgst:-false}, 00:29:52.933 "ddgst": ${ddgst:-false} 00:29:52.933 }, 00:29:52.933 "method": "bdev_nvme_attach_controller" 00:29:52.933 } 00:29:52.933 EOF 00:29:52.933 )") 00:29:52.933 11:57:25 -- target/dif.sh@82 -- # gen_fio_conf 00:29:52.933 11:57:25 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.933 11:57:25 -- target/dif.sh@54 -- # local file 00:29:52.933 11:57:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:52.933 11:57:25 -- target/dif.sh@56 -- # cat 00:29:52.933 11:57:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.933 11:57:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:52.933 11:57:25 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:52.933 11:57:25 -- common/autotest_common.sh@1330 -- # shift 00:29:52.933 11:57:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:52.933 11:57:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.933 11:57:25 -- nvmf/common.sh@542 -- # cat 00:29:52.933 11:57:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:52.933 11:57:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:52.933 11:57:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:52.933 
11:57:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:52.933 11:57:25 -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.933 11:57:25 -- nvmf/common.sh@544 -- # jq . 00:29:52.933 11:57:25 -- nvmf/common.sh@545 -- # IFS=, 00:29:52.933 11:57:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:52.933 "params": { 00:29:52.933 "name": "Nvme0", 00:29:52.933 "trtype": "tcp", 00:29:52.933 "traddr": "10.0.0.2", 00:29:52.933 "adrfam": "ipv4", 00:29:52.933 "trsvcid": "4420", 00:29:52.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:52.933 "hdgst": false, 00:29:52.933 "ddgst": false 00:29:52.933 }, 00:29:52.933 "method": "bdev_nvme_attach_controller" 00:29:52.933 }' 00:29:53.193 11:57:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:29:53.193 11:57:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:29:53.193 11:57:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.193 11:57:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:53.193 11:57:25 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:29:53.193 11:57:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:53.193 11:57:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:29:53.193 11:57:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:29:53.193 11:57:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:53.193 11:57:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:53.193 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:53.193 fio-3.35 00:29:53.193 Starting 1 thread 00:29:53.762 [2024-11-20 11:57:26.536226] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
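[editor's note] The fio run starting here is driven by two file descriptors: /dev/fd/62 carries the bdev JSON printed above, /dev/fd/61 the generated job file. A hand-rolled equivalent could look like the sketch below; the bdev.json wrapper layout, the dif.fio contents and the Nvme0n1 bdev name are assumptions for illustration, only the options visible in the trace are copied from it:
    # rough stand-in for the traced invocation (paths from the log; job file illustrative)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./dif.fio
    # bdev.json wraps the bdev_nvme_attach_controller params printed above in the standard
    # {"subsystems":[{"subsystem":"bdev","config":[...]}]} layout; dif.fio holds one job:
    # filename=Nvme0n1, rw=randread, bs=4k, iodepth=4 (matching the filename0 line below).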
00:29:53.762 [2024-11-20 11:57:26.536284] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:03.790 00:30:03.790 filename0: (groupid=0, jobs=1): err= 0: pid=91672: Wed Nov 20 11:57:36 2024 00:30:03.790 read: IOPS=369, BW=1478KiB/s (1514kB/s)(14.4MiB/10002msec) 00:30:03.790 slat (nsec): min=5487, max=59419, avg=6555.11, stdev=3223.64 00:30:03.790 clat (usec): min=296, max=41971, avg=10805.00, stdev=17730.08 00:30:03.790 lat (usec): min=302, max=41978, avg=10811.56, stdev=17729.81 00:30:03.790 clat percentiles (usec): 00:30:03.790 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 310], 20.00th=[ 318], 00:30:03.790 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:30:03.790 | 70.00th=[ 355], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:30:03.790 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:03.790 | 99.99th=[42206] 00:30:03.791 bw ( KiB/s): min= 1120, max= 1984, per=100.00%, avg=1493.89, stdev=230.77, samples=19 00:30:03.791 iops : min= 280, max= 496, avg=373.47, stdev=57.69, samples=19 00:30:03.791 lat (usec) : 500=72.46%, 750=1.46% 00:30:03.791 lat (msec) : 2=0.22%, 50=25.87% 00:30:03.791 cpu : usr=93.33%, sys=6.20%, ctx=21, majf=0, minf=9 00:30:03.791 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:03.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.791 issued rwts: total=3696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.791 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:03.791 00:30:03.791 Run status group 0 (all jobs): 00:30:03.791 READ: bw=1478KiB/s (1514kB/s), 1478KiB/s-1478KiB/s (1514kB/s-1514kB/s), io=14.4MiB (15.1MB), run=10002-10002msec 00:30:04.051 11:57:36 -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:04.051 11:57:36 -- target/dif.sh@43 -- # local sub 00:30:04.051 11:57:36 -- target/dif.sh@45 -- # for sub in "$@" 00:30:04.051 11:57:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:04.051 11:57:36 -- target/dif.sh@36 -- # local sub_id=0 00:30:04.051 11:57:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 11:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 11:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 00:30:04.051 real 0m10.957s 00:30:04.051 user 0m9.960s 00:30:04.051 sys 0m0.894s 00:30:04.051 11:57:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 ************************************ 00:30:04.051 END TEST fio_dif_1_default 00:30:04.051 ************************************ 00:30:04.051 11:57:36 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:04.051 11:57:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:04.051 11:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 ************************************ 00:30:04.051 START TEST 
fio_dif_1_multi_subsystems 00:30:04.051 ************************************ 00:30:04.051 11:57:36 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:30:04.051 11:57:36 -- target/dif.sh@92 -- # local files=1 00:30:04.051 11:57:36 -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:04.051 11:57:36 -- target/dif.sh@28 -- # local sub 00:30:04.051 11:57:36 -- target/dif.sh@30 -- # for sub in "$@" 00:30:04.051 11:57:36 -- target/dif.sh@31 -- # create_subsystem 0 00:30:04.051 11:57:36 -- target/dif.sh@18 -- # local sub_id=0 00:30:04.051 11:57:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 bdev_null0 00:30:04.051 11:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 11:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 11:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 [2024-11-20 11:57:36.972854] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.051 11:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:36 -- target/dif.sh@30 -- # for sub in "$@" 00:30:04.051 11:57:36 -- target/dif.sh@31 -- # create_subsystem 1 00:30:04.051 11:57:36 -- target/dif.sh@18 -- # local sub_id=1 00:30:04.051 11:57:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 bdev_null1 00:30:04.051 11:57:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:04.051 11:57:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 11:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:04.051 11:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:37 -- common/autotest_common.sh@10 -- # set +x 00:30:04.051 11:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.051 11:57:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.051 11:57:37 -- 
common/autotest_common.sh@10 -- # set +x 00:30:04.051 11:57:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.051 11:57:37 -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:04.051 11:57:37 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:04.051 11:57:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:04.051 11:57:37 -- nvmf/common.sh@520 -- # config=() 00:30:04.051 11:57:37 -- nvmf/common.sh@520 -- # local subsystem config 00:30:04.051 11:57:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:04.051 11:57:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.051 11:57:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:04.051 { 00:30:04.051 "params": { 00:30:04.051 "name": "Nvme$subsystem", 00:30:04.051 "trtype": "$TEST_TRANSPORT", 00:30:04.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.051 "adrfam": "ipv4", 00:30:04.051 "trsvcid": "$NVMF_PORT", 00:30:04.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.051 "hdgst": ${hdgst:-false}, 00:30:04.051 "ddgst": ${ddgst:-false} 00:30:04.051 }, 00:30:04.051 "method": "bdev_nvme_attach_controller" 00:30:04.051 } 00:30:04.051 EOF 00:30:04.051 )") 00:30:04.051 11:57:37 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.051 11:57:37 -- target/dif.sh@82 -- # gen_fio_conf 00:30:04.051 11:57:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:04.051 11:57:37 -- target/dif.sh@54 -- # local file 00:30:04.051 11:57:37 -- target/dif.sh@56 -- # cat 00:30:04.052 11:57:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.052 11:57:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:04.052 11:57:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:04.052 11:57:37 -- common/autotest_common.sh@1330 -- # shift 00:30:04.052 11:57:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:04.052 11:57:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.052 11:57:37 -- nvmf/common.sh@542 -- # cat 00:30:04.052 11:57:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:04.052 11:57:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:04.052 11:57:37 -- target/dif.sh@72 -- # (( file <= files )) 00:30:04.052 11:57:37 -- target/dif.sh@73 -- # cat 00:30:04.052 11:57:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:04.052 11:57:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:04.052 11:57:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:04.052 11:57:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:04.052 { 00:30:04.052 "params": { 00:30:04.052 "name": "Nvme$subsystem", 00:30:04.052 "trtype": "$TEST_TRANSPORT", 00:30:04.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.052 "adrfam": "ipv4", 00:30:04.052 "trsvcid": "$NVMF_PORT", 00:30:04.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.052 "hdgst": ${hdgst:-false}, 00:30:04.052 "ddgst": ${ddgst:-false} 00:30:04.052 }, 00:30:04.052 "method": "bdev_nvme_attach_controller" 00:30:04.052 } 00:30:04.052 EOF 00:30:04.052 )") 00:30:04.052 11:57:37 -- target/dif.sh@72 -- # (( file++ )) 00:30:04.052 11:57:37 -- 
target/dif.sh@72 -- # (( file <= files )) 00:30:04.052 11:57:37 -- nvmf/common.sh@542 -- # cat 00:30:04.052 11:57:37 -- nvmf/common.sh@544 -- # jq . 00:30:04.052 11:57:37 -- nvmf/common.sh@545 -- # IFS=, 00:30:04.052 11:57:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:04.052 "params": { 00:30:04.052 "name": "Nvme0", 00:30:04.052 "trtype": "tcp", 00:30:04.052 "traddr": "10.0.0.2", 00:30:04.052 "adrfam": "ipv4", 00:30:04.052 "trsvcid": "4420", 00:30:04.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.052 "hdgst": false, 00:30:04.052 "ddgst": false 00:30:04.052 }, 00:30:04.052 "method": "bdev_nvme_attach_controller" 00:30:04.052 },{ 00:30:04.052 "params": { 00:30:04.052 "name": "Nvme1", 00:30:04.052 "trtype": "tcp", 00:30:04.052 "traddr": "10.0.0.2", 00:30:04.052 "adrfam": "ipv4", 00:30:04.052 "trsvcid": "4420", 00:30:04.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.052 "hdgst": false, 00:30:04.052 "ddgst": false 00:30:04.052 }, 00:30:04.052 "method": "bdev_nvme_attach_controller" 00:30:04.052 }' 00:30:04.052 11:57:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:04.052 11:57:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:04.052 11:57:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.052 11:57:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:04.052 11:57:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:04.052 11:57:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:04.312 11:57:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:04.312 11:57:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:04.312 11:57:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:04.312 11:57:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.312 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:04.312 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:04.312 fio-3.35 00:30:04.312 Starting 2 threads 00:30:04.883 [2024-11-20 11:57:37.778908] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
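[editor's note] The target side of this two-subsystem run is simply the single-subsystem recipe applied twice; condensed from the rpc_cmd trace above (serial numbers and NQNs as used in the run):
    for i in 0 1; do
      scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    # fio then attaches to both subsystems over TCP and drives one job per backing bdev,
    # which is why two filenames (filename0/filename1) appear in the job output below.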
00:30:04.883 [2024-11-20 11:57:37.779313] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:17.124 00:30:17.124 filename0: (groupid=0, jobs=1): err= 0: pid=91833: Wed Nov 20 11:57:47 2024 00:30:17.124 read: IOPS=1027, BW=4109KiB/s (4207kB/s)(40.2MiB/10008msec) 00:30:17.124 slat (nsec): min=5554, max=65533, avg=7961.18, stdev=5624.63 00:30:17.124 clat (usec): min=309, max=42611, avg=3870.77, stdev=11360.60 00:30:17.124 lat (usec): min=316, max=42650, avg=3878.73, stdev=11360.53 00:30:17.124 clat percentiles (usec): 00:30:17.124 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:30:17.124 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 375], 00:30:17.124 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 644], 95.00th=[40633], 00:30:17.124 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:17.124 | 99.99th=[42730] 00:30:17.124 bw ( KiB/s): min= 2432, max= 5568, per=77.80%, avg=4119.58, stdev=897.39, samples=19 00:30:17.124 iops : min= 608, max= 1392, avg=1029.89, stdev=224.35, samples=19 00:30:17.124 lat (usec) : 500=83.35%, 750=7.77%, 1000=0.28% 00:30:17.124 lat (msec) : 50=8.60% 00:30:17.124 cpu : usr=98.66%, sys=0.82%, ctx=13, majf=0, minf=0 00:30:17.124 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.124 issued rwts: total=10280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.124 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:17.124 filename1: (groupid=0, jobs=1): err= 0: pid=91834: Wed Nov 20 11:57:47 2024 00:30:17.124 read: IOPS=298, BW=1195KiB/s (1223kB/s)(11.7MiB/10030msec) 00:30:17.124 slat (nsec): min=5185, max=75987, avg=9348.21, stdev=6937.83 00:30:17.124 clat (usec): min=317, max=41974, avg=13361.49, stdev=18857.46 00:30:17.124 lat (usec): min=323, max=41980, avg=13370.84, stdev=18857.11 00:30:17.124 clat percentiles (usec): 00:30:17.124 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 355], 00:30:17.124 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 603], 60.00th=[ 627], 00:30:17.124 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:17.124 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:17.124 | 99.99th=[42206] 00:30:17.124 bw ( KiB/s): min= 800, max= 2944, per=22.59%, avg=1196.80, stdev=471.03, samples=20 00:30:17.124 iops : min= 200, max= 736, avg=299.20, stdev=117.76, samples=20 00:30:17.124 lat (usec) : 500=43.39%, 750=23.77%, 1000=0.80% 00:30:17.124 lat (msec) : 2=0.13%, 50=31.91% 00:30:17.124 cpu : usr=96.99%, sys=2.56%, ctx=11, majf=0, minf=0 00:30:17.124 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.124 issued rwts: total=2996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.124 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:17.124 00:30:17.124 Run status group 0 (all jobs): 00:30:17.124 READ: bw=5295KiB/s (5422kB/s), 1195KiB/s-4109KiB/s (1223kB/s-4207kB/s), io=51.9MiB (54.4MB), run=10008-10030msec 00:30:17.124 11:57:48 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:17.124 11:57:48 -- target/dif.sh@43 -- # local sub 00:30:17.124 11:57:48 -- target/dif.sh@45 -- # for sub in "$@" 
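[editor's note] The destroy loop entered here mirrors the setup, issuing per subsystem (condensed from the teardown trace that follows):
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete bdev_null1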
00:30:17.124 11:57:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:17.124 11:57:48 -- target/dif.sh@36 -- # local sub_id=0 00:30:17.125 11:57:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 11:57:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 11:57:48 -- target/dif.sh@45 -- # for sub in "$@" 00:30:17.125 11:57:48 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:17.125 11:57:48 -- target/dif.sh@36 -- # local sub_id=1 00:30:17.125 11:57:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 11:57:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 00:30:17.125 real 0m11.239s 00:30:17.125 user 0m20.473s 00:30:17.125 sys 0m0.645s 00:30:17.125 11:57:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 ************************************ 00:30:17.125 END TEST fio_dif_1_multi_subsystems 00:30:17.125 ************************************ 00:30:17.125 11:57:48 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:17.125 11:57:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:17.125 11:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 ************************************ 00:30:17.125 START TEST fio_dif_rand_params 00:30:17.125 ************************************ 00:30:17.125 11:57:48 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:30:17.125 11:57:48 -- target/dif.sh@100 -- # local NULL_DIF 00:30:17.125 11:57:48 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:17.125 11:57:48 -- target/dif.sh@103 -- # NULL_DIF=3 00:30:17.125 11:57:48 -- target/dif.sh@103 -- # bs=128k 00:30:17.125 11:57:48 -- target/dif.sh@103 -- # numjobs=3 00:30:17.125 11:57:48 -- target/dif.sh@103 -- # iodepth=3 00:30:17.125 11:57:48 -- target/dif.sh@103 -- # runtime=5 00:30:17.125 11:57:48 -- target/dif.sh@105 -- # create_subsystems 0 00:30:17.125 11:57:48 -- target/dif.sh@28 -- # local sub 00:30:17.125 11:57:48 -- target/dif.sh@30 -- # for sub in "$@" 00:30:17.125 11:57:48 -- target/dif.sh@31 -- # create_subsystem 0 00:30:17.125 11:57:48 -- target/dif.sh@18 -- # local sub_id=0 00:30:17.125 11:57:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 bdev_null0 00:30:17.125 11:57:48 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 11:57:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 11:57:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 11:57:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.125 11:57:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.125 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 [2024-11-20 11:57:48.266487] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.125 11:57:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.125 11:57:48 -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:17.125 11:57:48 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:17.125 11:57:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:17.125 11:57:48 -- nvmf/common.sh@520 -- # config=() 00:30:17.125 11:57:48 -- nvmf/common.sh@520 -- # local subsystem config 00:30:17.125 11:57:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:17.125 11:57:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.125 11:57:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:17.125 { 00:30:17.125 "params": { 00:30:17.125 "name": "Nvme$subsystem", 00:30:17.125 "trtype": "$TEST_TRANSPORT", 00:30:17.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.125 "adrfam": "ipv4", 00:30:17.125 "trsvcid": "$NVMF_PORT", 00:30:17.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.125 "hdgst": ${hdgst:-false}, 00:30:17.125 "ddgst": ${ddgst:-false} 00:30:17.125 }, 00:30:17.125 "method": "bdev_nvme_attach_controller" 00:30:17.125 } 00:30:17.125 EOF 00:30:17.125 )") 00:30:17.125 11:57:48 -- target/dif.sh@82 -- # gen_fio_conf 00:30:17.125 11:57:48 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.125 11:57:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:17.125 11:57:48 -- target/dif.sh@54 -- # local file 00:30:17.125 11:57:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.125 11:57:48 -- target/dif.sh@56 -- # cat 00:30:17.125 11:57:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:17.125 11:57:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.125 11:57:48 -- common/autotest_common.sh@1330 -- # shift 00:30:17.125 11:57:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:17.125 11:57:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.125 11:57:48 -- nvmf/common.sh@542 -- # cat 00:30:17.125 11:57:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.125 11:57:48 -- target/dif.sh@72 -- # (( file <= files )) 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:17.125 11:57:48 -- nvmf/common.sh@544 -- # jq . 00:30:17.125 11:57:48 -- nvmf/common.sh@545 -- # IFS=, 00:30:17.125 11:57:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:17.125 "params": { 00:30:17.125 "name": "Nvme0", 00:30:17.125 "trtype": "tcp", 00:30:17.125 "traddr": "10.0.0.2", 00:30:17.125 "adrfam": "ipv4", 00:30:17.125 "trsvcid": "4420", 00:30:17.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:17.125 "hdgst": false, 00:30:17.125 "ddgst": false 00:30:17.125 }, 00:30:17.125 "method": "bdev_nvme_attach_controller" 00:30:17.125 }' 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:17.125 11:57:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:17.125 11:57:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:17.125 11:57:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:17.125 11:57:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:17.125 11:57:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:17.125 11:57:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.125 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:17.125 ... 00:30:17.125 fio-3.35 00:30:17.125 Starting 3 threads 00:30:17.125 [2024-11-20 11:57:48.879510] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
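For reference, the step traced above reduces to two things: gen_nvmf_target_json emits a bdev_nvme_attach_controller block for Nvme0 (the JSON printed just above), and fio is launched through the SPDK bdev plugin with that config. A minimal hand-rolled equivalent is sketched below, assuming the target created by create_subsystems 0 is still listening on 10.0.0.2:4420. The /tmp/nvme0.json path and the job name are hypothetical, and the outer "subsystems"/"config" wrapper is an assumption, since the trace only shows the per-controller params block that gen_nvmf_target_json stitches in.

# Sketch only -- not lifted verbatim from dif.sh. Write the bdev_nvme config
# (params copied from the printf output above; outer wrapper assumed):
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Run fio through the SPDK bdev plugin, with the job-file options from the
# NULL_DIF=3 run above (bs=128k, 3 jobs, iodepth 3, 5s) folded into flags:
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=rand_read --thread=1 --ioengine=spdk_bdev \
  --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
  --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=5 --time_based

The filename handed to fio is a bdev name (Nvme0n1, following SPDK's <controller>n<nsid> convention), not a block-device path; the RPC socket notices that follow are emitted because the fio plugin tries to start its own RPC service while the nvmf target already owns /var/tmp/spdk.sock.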
00:30:17.125 [2024-11-20 11:57:48.879913] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:21.324 00:30:21.324 filename0: (groupid=0, jobs=1): err= 0: pid=92003: Wed Nov 20 11:57:54 2024 00:30:21.324 read: IOPS=381, BW=47.6MiB/s (49.9MB/s)(238MiB/5002msec) 00:30:21.324 slat (nsec): min=5673, max=59454, avg=11537.67, stdev=7973.18 00:30:21.324 clat (usec): min=1807, max=52037, avg=7847.37, stdev=4374.39 00:30:21.324 lat (usec): min=1817, max=52044, avg=7858.91, stdev=4375.71 00:30:21.324 clat percentiles (usec): 00:30:21.324 | 1.00th=[ 3064], 5.00th=[ 3097], 10.00th=[ 3163], 20.00th=[ 6194], 00:30:21.324 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7701], 00:30:21.324 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:30:21.324 | 99.00th=[12125], 99.50th=[47973], 99.90th=[51643], 99.95th=[52167], 00:30:21.324 | 99.99th=[52167] 00:30:21.324 bw ( KiB/s): min=40704, max=56832, per=38.91%, avg=48839.11, stdev=5759.68, samples=9 00:30:21.324 iops : min= 318, max= 444, avg=381.56, stdev=45.00, samples=9 00:30:21.324 lat (msec) : 2=0.05%, 4=11.65%, 10=64.27%, 20=23.24%, 50=0.47% 00:30:21.324 lat (msec) : 100=0.31% 00:30:21.324 cpu : usr=96.12%, sys=2.72%, ctx=4, majf=0, minf=0 00:30:21.324 IO depths : 1=24.9%, 2=75.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.324 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.324 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:21.324 filename0: (groupid=0, jobs=1): err= 0: pid=92004: Wed Nov 20 11:57:54 2024 00:30:21.324 read: IOPS=334, BW=41.8MiB/s (43.8MB/s)(210MiB/5033msec) 00:30:21.324 slat (nsec): min=5853, max=53923, avg=12525.39, stdev=6348.89 00:30:21.324 clat (usec): min=2545, max=50566, avg=8959.36, stdev=8492.12 00:30:21.324 lat (usec): min=2554, max=50590, avg=8971.88, stdev=8492.78 00:30:21.324 clat percentiles (usec): 00:30:21.324 | 1.00th=[ 3064], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5407], 00:30:21.324 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 6915], 60.00th=[ 8717], 00:30:21.324 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10814], 00:30:21.324 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:30:21.324 | 99.99th=[50594] 00:30:21.324 bw ( KiB/s): min=35328, max=48384, per=33.81%, avg=42439.11, stdev=4119.49, samples=9 00:30:21.324 iops : min= 276, max= 378, avg=331.56, stdev=32.18, samples=9 00:30:21.324 lat (msec) : 4=2.08%, 10=86.56%, 20=7.07%, 50=3.45%, 100=0.83% 00:30:21.324 cpu : usr=95.89%, sys=2.74%, ctx=6, majf=0, minf=0 00:30:21.324 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.324 issued rwts: total=1682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.324 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:21.324 filename0: (groupid=0, jobs=1): err= 0: pid=92005: Wed Nov 20 11:57:54 2024 00:30:21.324 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(169MiB/5003msec) 00:30:21.324 slat (nsec): min=5429, max=49647, avg=15281.05, stdev=8448.98 00:30:21.324 clat (usec): min=2897, max=50354, avg=11108.69, stdev=12104.85 00:30:21.324 lat (usec): min=2904, max=50374, avg=11123.97, 
stdev=12104.97 00:30:21.324 clat percentiles (usec): 00:30:21.324 | 1.00th=[ 3720], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6063], 00:30:21.324 | 30.00th=[ 6587], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7963], 00:30:21.324 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 9372], 95.00th=[47973], 00:30:21.324 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594], 00:30:21.324 | 99.99th=[50594] 00:30:21.324 bw ( KiB/s): min=21504, max=48896, per=27.94%, avg=35072.00, stdev=9150.88, samples=9 00:30:21.324 iops : min= 168, max= 382, avg=274.00, stdev=71.49, samples=9 00:30:21.324 lat (msec) : 4=1.11%, 10=89.32%, 50=9.35%, 100=0.22% 00:30:21.324 cpu : usr=94.98%, sys=3.80%, ctx=7, majf=0, minf=0 00:30:21.324 IO depths : 1=6.4%, 2=93.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.324 issued rwts: total=1348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.324 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:21.324 00:30:21.324 Run status group 0 (all jobs): 00:30:21.324 READ: bw=123MiB/s (129MB/s), 33.7MiB/s-47.6MiB/s (35.3MB/s-49.9MB/s), io=617MiB (647MB), run=5002-5033msec 00:30:21.324 11:57:54 -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:21.324 11:57:54 -- target/dif.sh@43 -- # local sub 00:30:21.324 11:57:54 -- target/dif.sh@45 -- # for sub in "$@" 00:30:21.324 11:57:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:21.324 11:57:54 -- target/dif.sh@36 -- # local sub_id=0 00:30:21.324 11:57:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.324 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.324 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.324 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.324 11:57:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:21.324 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.324 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.324 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.324 11:57:54 -- target/dif.sh@109 -- # NULL_DIF=2 00:30:21.324 11:57:54 -- target/dif.sh@109 -- # bs=4k 00:30:21.324 11:57:54 -- target/dif.sh@109 -- # numjobs=8 00:30:21.324 11:57:54 -- target/dif.sh@109 -- # iodepth=16 00:30:21.324 11:57:54 -- target/dif.sh@109 -- # runtime= 00:30:21.324 11:57:54 -- target/dif.sh@109 -- # files=2 00:30:21.324 11:57:54 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:21.324 11:57:54 -- target/dif.sh@28 -- # local sub 00:30:21.324 11:57:54 -- target/dif.sh@30 -- # for sub in "$@" 00:30:21.324 11:57:54 -- target/dif.sh@31 -- # create_subsystem 0 00:30:21.324 11:57:54 -- target/dif.sh@18 -- # local sub_id=0 00:30:21.324 11:57:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:21.324 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.324 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.324 bdev_null0 00:30:21.324 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.324 11:57:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:21.324 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.324 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.324 
11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.324 11:57:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:21.324 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.324 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.324 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.324 11:57:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:21.324 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.324 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.324 [2024-11-20 11:57:54.301056] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.324 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.324 11:57:54 -- target/dif.sh@30 -- # for sub in "$@" 00:30:21.324 11:57:54 -- target/dif.sh@31 -- # create_subsystem 1 00:30:21.324 11:57:54 -- target/dif.sh@18 -- # local sub_id=1 00:30:21.324 11:57:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:21.324 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.324 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.324 bdev_null1 00:30:21.325 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.325 11:57:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:21.325 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.325 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.325 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.325 11:57:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:21.325 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.325 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.325 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.325 11:57:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.325 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.325 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.325 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.325 11:57:54 -- target/dif.sh@30 -- # for sub in "$@" 00:30:21.325 11:57:54 -- target/dif.sh@31 -- # create_subsystem 2 00:30:21.325 11:57:54 -- target/dif.sh@18 -- # local sub_id=2 00:30:21.325 11:57:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:21.325 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.325 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.584 bdev_null2 00:30:21.584 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.584 11:57:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:21.584 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.584 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.584 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.584 11:57:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:21.584 11:57:54 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:21.584 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.584 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.584 11:57:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:21.584 11:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.584 11:57:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.584 11:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.584 11:57:54 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:21.584 11:57:54 -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:21.584 11:57:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:21.584 11:57:54 -- nvmf/common.sh@520 -- # config=() 00:30:21.584 11:57:54 -- nvmf/common.sh@520 -- # local subsystem config 00:30:21.584 11:57:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:21.584 11:57:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:21.584 { 00:30:21.584 "params": { 00:30:21.584 "name": "Nvme$subsystem", 00:30:21.584 "trtype": "$TEST_TRANSPORT", 00:30:21.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:21.584 "adrfam": "ipv4", 00:30:21.584 "trsvcid": "$NVMF_PORT", 00:30:21.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:21.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:21.584 "hdgst": ${hdgst:-false}, 00:30:21.584 "ddgst": ${ddgst:-false} 00:30:21.584 }, 00:30:21.584 "method": "bdev_nvme_attach_controller" 00:30:21.584 } 00:30:21.584 EOF 00:30:21.584 )") 00:30:21.585 11:57:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.585 11:57:54 -- target/dif.sh@82 -- # gen_fio_conf 00:30:21.585 11:57:54 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.585 11:57:54 -- target/dif.sh@54 -- # local file 00:30:21.585 11:57:54 -- nvmf/common.sh@542 -- # cat 00:30:21.585 11:57:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:21.585 11:57:54 -- target/dif.sh@56 -- # cat 00:30:21.585 11:57:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.585 11:57:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:21.585 11:57:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.585 11:57:54 -- common/autotest_common.sh@1330 -- # shift 00:30:21.585 11:57:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:21.585 11:57:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.585 11:57:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:21.585 11:57:54 -- target/dif.sh@72 -- # (( file <= files )) 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.585 11:57:54 -- target/dif.sh@73 -- # cat 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:21.585 11:57:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:21.585 11:57:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:21.585 { 00:30:21.585 "params": { 00:30:21.585 "name": "Nvme$subsystem", 00:30:21.585 "trtype": "$TEST_TRANSPORT", 00:30:21.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:21.585 "adrfam": "ipv4", 00:30:21.585 "trsvcid": "$NVMF_PORT", 00:30:21.585 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:21.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:21.585 "hdgst": ${hdgst:-false}, 00:30:21.585 "ddgst": ${ddgst:-false} 00:30:21.585 }, 00:30:21.585 "method": "bdev_nvme_attach_controller" 00:30:21.585 } 00:30:21.585 EOF 00:30:21.585 )") 00:30:21.585 11:57:54 -- nvmf/common.sh@542 -- # cat 00:30:21.585 11:57:54 -- target/dif.sh@72 -- # (( file++ )) 00:30:21.585 11:57:54 -- target/dif.sh@72 -- # (( file <= files )) 00:30:21.585 11:57:54 -- target/dif.sh@73 -- # cat 00:30:21.585 11:57:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:21.585 11:57:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:21.585 { 00:30:21.585 "params": { 00:30:21.585 "name": "Nvme$subsystem", 00:30:21.585 "trtype": "$TEST_TRANSPORT", 00:30:21.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:21.585 "adrfam": "ipv4", 00:30:21.585 "trsvcid": "$NVMF_PORT", 00:30:21.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:21.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:21.585 "hdgst": ${hdgst:-false}, 00:30:21.585 "ddgst": ${ddgst:-false} 00:30:21.585 }, 00:30:21.585 "method": "bdev_nvme_attach_controller" 00:30:21.585 } 00:30:21.585 EOF 00:30:21.585 )") 00:30:21.585 11:57:54 -- target/dif.sh@72 -- # (( file++ )) 00:30:21.585 11:57:54 -- target/dif.sh@72 -- # (( file <= files )) 00:30:21.585 11:57:54 -- nvmf/common.sh@542 -- # cat 00:30:21.585 11:57:54 -- nvmf/common.sh@544 -- # jq . 00:30:21.585 11:57:54 -- nvmf/common.sh@545 -- # IFS=, 00:30:21.585 11:57:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:21.585 "params": { 00:30:21.585 "name": "Nvme0", 00:30:21.585 "trtype": "tcp", 00:30:21.585 "traddr": "10.0.0.2", 00:30:21.585 "adrfam": "ipv4", 00:30:21.585 "trsvcid": "4420", 00:30:21.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.585 "hdgst": false, 00:30:21.585 "ddgst": false 00:30:21.585 }, 00:30:21.585 "method": "bdev_nvme_attach_controller" 00:30:21.585 },{ 00:30:21.585 "params": { 00:30:21.585 "name": "Nvme1", 00:30:21.585 "trtype": "tcp", 00:30:21.585 "traddr": "10.0.0.2", 00:30:21.585 "adrfam": "ipv4", 00:30:21.585 "trsvcid": "4420", 00:30:21.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:21.585 "hdgst": false, 00:30:21.585 "ddgst": false 00:30:21.585 }, 00:30:21.585 "method": "bdev_nvme_attach_controller" 00:30:21.585 },{ 00:30:21.585 "params": { 00:30:21.585 "name": "Nvme2", 00:30:21.585 "trtype": "tcp", 00:30:21.585 "traddr": "10.0.0.2", 00:30:21.585 "adrfam": "ipv4", 00:30:21.585 "trsvcid": "4420", 00:30:21.585 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:21.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:21.585 "hdgst": false, 00:30:21.585 "ddgst": false 00:30:21.585 }, 00:30:21.585 "method": "bdev_nvme_attach_controller" 00:30:21.585 }' 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:21.585 11:57:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:21.585 11:57:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:21.585 11:57:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:21.585 11:57:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 
00:30:21.585 11:57:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:21.585 11:57:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.845 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:21.845 ... 00:30:21.845 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:21.845 ... 00:30:21.845 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:21.845 ... 00:30:21.845 fio-3.35 00:30:21.845 Starting 24 threads 00:30:22.414 [2024-11-20 11:57:55.296978] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:30:22.414 [2024-11-20 11:57:55.297028] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:34.647 00:30:34.647 filename0: (groupid=0, jobs=1): err= 0: pid=92115: Wed Nov 20 11:58:05 2024 00:30:34.647 read: IOPS=320, BW=1282KiB/s (1313kB/s)(12.5MiB/10005msec) 00:30:34.647 slat (usec): min=2, max=4032, avg=13.77, stdev=85.47 00:30:34.647 clat (msec): min=4, max=113, avg=49.82, stdev=15.47 00:30:34.647 lat (msec): min=7, max=113, avg=49.84, stdev=15.47 00:30:34.647 clat percentiles (msec): 00:30:34.647 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 37], 00:30:34.647 | 30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 49], 00:30:34.647 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 82], 00:30:34.647 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 114], 99.95th=[ 114], 00:30:34.647 | 99.99th=[ 114] 00:30:34.647 bw ( KiB/s): min= 784, max= 1560, per=4.08%, avg=1277.89, stdev=174.70, samples=19 00:30:34.647 iops : min= 196, max= 390, avg=319.47, stdev=43.67, samples=19 00:30:34.647 lat (msec) : 10=0.06%, 20=0.31%, 50=63.27%, 100=36.01%, 250=0.34% 00:30:34.647 cpu : usr=33.34%, sys=0.28%, ctx=909, majf=0, minf=9 00:30:34.647 IO depths : 1=1.1%, 2=2.9%, 4=10.9%, 8=72.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:34.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.647 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.647 issued rwts: total=3207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.647 filename0: (groupid=0, jobs=1): err= 0: pid=92116: Wed Nov 20 11:58:05 2024 00:30:34.647 read: IOPS=384, BW=1537KiB/s (1574kB/s)(15.1MiB/10035msec) 00:30:34.647 slat (usec): min=5, max=8026, avg=14.23, stdev=144.56 00:30:34.647 clat (usec): min=1124, max=107090, avg=41499.70, stdev=16023.14 00:30:34.647 lat (usec): min=1134, max=107112, avg=41513.93, stdev=16023.59 00:30:34.647 clat percentiles (msec): 00:30:34.647 | 1.00th=[ 3], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 31], 00:30:34.647 | 30.00th=[ 33], 40.00th=[ 37], 50.00th=[ 41], 60.00th=[ 45], 00:30:34.647 | 70.00th=[ 48], 80.00th=[ 52], 90.00th=[ 59], 95.00th=[ 71], 00:30:34.647 | 99.00th=[ 95], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:30:34.647 | 99.99th=[ 108] 00:30:34.647 bw ( KiB/s): min= 816, max= 2560, per=4.91%, avg=1538.00, stdev=349.41, samples=20 00:30:34.647 iops : min= 204, max= 640, avg=384.50, stdev=87.35, samples=20 00:30:34.647 lat (msec) : 2=0.42%, 4=2.08%, 10=1.25%, 20=0.57%, 50=74.50% 00:30:34.647 lat (msec) : 100=20.36%, 250=0.83% 00:30:34.647 
cpu : usr=42.72%, sys=0.48%, ctx=1136, majf=0, minf=0 00:30:34.647 IO depths : 1=1.2%, 2=2.7%, 4=10.4%, 8=73.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:34.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.647 complete : 0=0.0%, 4=90.2%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.647 issued rwts: total=3855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.647 filename0: (groupid=0, jobs=1): err= 0: pid=92117: Wed Nov 20 11:58:05 2024 00:30:34.647 read: IOPS=330, BW=1323KiB/s (1355kB/s)(12.9MiB/10001msec) 00:30:34.647 slat (usec): min=5, max=4032, avg=14.03, stdev=99.16 00:30:34.647 clat (msec): min=17, max=115, avg=48.27, stdev=14.78 00:30:34.647 lat (msec): min=17, max=115, avg=48.28, stdev=14.78 00:30:34.647 clat percentiles (msec): 00:30:34.647 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 36], 00:30:34.647 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 49], 00:30:34.647 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 67], 95.00th=[ 77], 00:30:34.647 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 116], 99.95th=[ 116], 00:30:34.647 | 99.99th=[ 116] 00:30:34.647 bw ( KiB/s): min= 888, max= 1680, per=4.21%, avg=1318.16, stdev=199.45, samples=19 00:30:34.648 iops : min= 222, max= 420, avg=329.53, stdev=49.85, samples=19 00:30:34.648 lat (msec) : 20=0.70%, 50=62.33%, 100=36.79%, 250=0.18% 00:30:34.648 cpu : usr=38.86%, sys=0.36%, ctx=1152, majf=0, minf=9 00:30:34.648 IO depths : 1=1.8%, 2=4.3%, 4=13.5%, 8=69.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename0: (groupid=0, jobs=1): err= 0: pid=92118: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=340, BW=1360KiB/s (1393kB/s)(13.3MiB/10020msec) 00:30:34.648 slat (usec): min=4, max=13430, avg=18.10, stdev=276.51 00:30:34.648 clat (msec): min=10, max=111, avg=46.93, stdev=14.21 00:30:34.648 lat (msec): min=10, max=111, avg=46.95, stdev=14.22 00:30:34.648 clat percentiles (msec): 00:30:34.648 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 36], 00:30:34.648 | 30.00th=[ 38], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 48], 00:30:34.648 | 70.00th=[ 53], 80.00th=[ 59], 90.00th=[ 64], 95.00th=[ 72], 00:30:34.648 | 99.00th=[ 89], 99.50th=[ 96], 99.90th=[ 112], 99.95th=[ 112], 00:30:34.648 | 99.99th=[ 112] 00:30:34.648 bw ( KiB/s): min= 1040, max= 2068, per=4.33%, avg=1356.45, stdev=206.25, samples=20 00:30:34.648 iops : min= 260, max= 517, avg=339.10, stdev=51.56, samples=20 00:30:34.648 lat (msec) : 20=0.94%, 50=63.94%, 100=34.98%, 250=0.15% 00:30:34.648 cpu : usr=36.52%, sys=0.32%, ctx=1049, majf=0, minf=9 00:30:34.648 IO depths : 1=0.7%, 2=2.1%, 4=10.1%, 8=74.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename0: (groupid=0, jobs=1): err= 0: pid=92119: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=324, BW=1299KiB/s (1331kB/s)(12.7MiB/10045msec) 00:30:34.648 slat (usec): min=4, max=8033, avg=15.88, 
stdev=198.56 00:30:34.648 clat (msec): min=13, max=120, avg=49.13, stdev=15.93 00:30:34.648 lat (msec): min=13, max=120, avg=49.14, stdev=15.92 00:30:34.648 clat percentiles (msec): 00:30:34.648 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 36], 00:30:34.648 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 50], 00:30:34.648 | 70.00th=[ 56], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 81], 00:30:34.648 | 99.00th=[ 99], 99.50th=[ 104], 99.90th=[ 121], 99.95th=[ 121], 00:30:34.648 | 99.99th=[ 121] 00:30:34.648 bw ( KiB/s): min= 816, max= 1584, per=4.14%, avg=1298.65, stdev=191.45, samples=20 00:30:34.648 iops : min= 204, max= 396, avg=324.65, stdev=47.87, samples=20 00:30:34.648 lat (msec) : 20=0.98%, 50=61.69%, 100=36.65%, 250=0.67% 00:30:34.648 cpu : usr=32.24%, sys=0.32%, ctx=998, majf=0, minf=9 00:30:34.648 IO depths : 1=1.2%, 2=2.5%, 4=9.8%, 8=74.0%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename0: (groupid=0, jobs=1): err= 0: pid=92120: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=346, BW=1385KiB/s (1419kB/s)(13.6MiB/10048msec) 00:30:34.648 slat (usec): min=4, max=4028, avg=14.33, stdev=107.75 00:30:34.648 clat (msec): min=2, max=108, avg=46.05, stdev=14.93 00:30:34.648 lat (msec): min=2, max=108, avg=46.06, stdev=14.93 00:30:34.648 clat percentiles (msec): 00:30:34.648 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 34], 00:30:34.648 | 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 46], 60.00th=[ 48], 00:30:34.648 | 70.00th=[ 51], 80.00th=[ 57], 90.00th=[ 64], 95.00th=[ 72], 00:30:34.648 | 99.00th=[ 94], 99.50th=[ 95], 99.90th=[ 109], 99.95th=[ 109], 00:30:34.648 | 99.99th=[ 109] 00:30:34.648 bw ( KiB/s): min= 848, max= 1712, per=4.42%, avg=1385.10, stdev=225.93, samples=20 00:30:34.648 iops : min= 212, max= 428, avg=346.25, stdev=56.51, samples=20 00:30:34.648 lat (msec) : 4=0.80%, 10=0.57%, 20=0.26%, 50=66.81%, 100=31.44% 00:30:34.648 lat (msec) : 250=0.11% 00:30:34.648 cpu : usr=42.25%, sys=0.49%, ctx=1164, majf=0, minf=9 00:30:34.648 IO depths : 1=1.5%, 2=3.2%, 4=10.5%, 8=72.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename0: (groupid=0, jobs=1): err= 0: pid=92121: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=313, BW=1252KiB/s (1283kB/s)(12.3MiB/10025msec) 00:30:34.648 slat (usec): min=2, max=8037, avg=21.78, stdev=229.46 00:30:34.648 clat (usec): min=23019, max=99163, avg=50900.59, stdev=13107.39 00:30:34.648 lat (usec): min=23025, max=99168, avg=50922.37, stdev=13106.40 00:30:34.648 clat percentiles (usec): 00:30:34.648 | 1.00th=[25297], 5.00th=[33817], 10.00th=[36963], 20.00th=[40633], 00:30:34.648 | 30.00th=[43779], 40.00th=[46924], 50.00th=[47973], 60.00th=[50070], 00:30:34.648 | 70.00th=[55837], 80.00th=[60031], 90.00th=[65799], 95.00th=[74974], 00:30:34.648 | 99.00th=[95945], 99.50th=[99091], 99.90th=[99091], 99.95th=[99091], 00:30:34.648 | 99.99th=[99091] 00:30:34.648 bw ( KiB/s): min= 784, max= 1440, per=3.99%, avg=1249.30, 
stdev=145.82, samples=20 00:30:34.648 iops : min= 196, max= 360, avg=312.30, stdev=36.47, samples=20 00:30:34.648 lat (msec) : 50=59.70%, 100=40.30% 00:30:34.648 cpu : usr=41.24%, sys=0.39%, ctx=1066, majf=0, minf=9 00:30:34.648 IO depths : 1=1.9%, 2=4.5%, 4=13.6%, 8=67.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=91.3%, 8=4.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename0: (groupid=0, jobs=1): err= 0: pid=92122: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=328, BW=1313KiB/s (1345kB/s)(12.9MiB/10048msec) 00:30:34.648 slat (usec): min=4, max=8030, avg=16.20, stdev=185.55 00:30:34.648 clat (msec): min=6, max=111, avg=48.54, stdev=14.79 00:30:34.648 lat (msec): min=6, max=111, avg=48.55, stdev=14.79 00:30:34.648 clat percentiles (msec): 00:30:34.648 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 34], 20.00th=[ 37], 00:30:34.648 | 30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 48], 00:30:34.648 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 72], 00:30:34.648 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 112], 99.95th=[ 112], 00:30:34.648 | 99.99th=[ 112] 00:30:34.648 bw ( KiB/s): min= 768, max= 1664, per=4.19%, avg=1313.05, stdev=197.51, samples=20 00:30:34.648 iops : min= 192, max= 416, avg=328.25, stdev=49.39, samples=20 00:30:34.648 lat (msec) : 10=0.48%, 20=1.18%, 50=64.26%, 100=33.40%, 250=0.67% 00:30:34.648 cpu : usr=33.46%, sys=0.31%, ctx=914, majf=0, minf=9 00:30:34.648 IO depths : 1=0.9%, 2=2.1%, 4=9.2%, 8=75.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename1: (groupid=0, jobs=1): err= 0: pid=92123: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=320, BW=1283KiB/s (1314kB/s)(12.6MiB/10029msec) 00:30:34.648 slat (usec): min=4, max=8061, avg=20.82, stdev=283.19 00:30:34.648 clat (msec): min=12, max=116, avg=49.71, stdev=15.76 00:30:34.648 lat (msec): min=12, max=116, avg=49.73, stdev=15.76 00:30:34.648 clat percentiles (msec): 00:30:34.648 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 36], 00:30:34.648 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 50], 00:30:34.648 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 71], 95.00th=[ 77], 00:30:34.648 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 116], 99.95th=[ 116], 00:30:34.648 | 99.99th=[ 116] 00:30:34.648 bw ( KiB/s): min= 864, max= 1677, per=4.08%, avg=1279.70, stdev=173.22, samples=20 00:30:34.648 iops : min= 216, max= 419, avg=319.90, stdev=43.29, samples=20 00:30:34.648 lat (msec) : 20=0.68%, 50=60.43%, 100=37.89%, 250=0.99% 00:30:34.648 cpu : usr=33.25%, sys=0.26%, ctx=896, majf=0, minf=9 00:30:34.648 IO depths : 1=1.4%, 2=3.1%, 4=11.2%, 8=72.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename1: (groupid=0, jobs=1): 
err= 0: pid=92124: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=310, BW=1244KiB/s (1273kB/s)(12.1MiB/10003msec) 00:30:34.648 slat (usec): min=2, max=10041, avg=24.59, stdev=295.71 00:30:34.648 clat (msec): min=2, max=122, avg=51.33, stdev=15.28 00:30:34.648 lat (msec): min=2, max=122, avg=51.35, stdev=15.28 00:30:34.648 clat percentiles (msec): 00:30:34.648 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 40], 00:30:34.648 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 54], 00:30:34.648 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 81], 00:30:34.648 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:30:34.648 | 99.99th=[ 124] 00:30:34.648 bw ( KiB/s): min= 896, max= 1424, per=3.92%, avg=1229.79, stdev=156.20, samples=19 00:30:34.648 iops : min= 224, max= 356, avg=307.42, stdev=39.06, samples=19 00:30:34.648 lat (msec) : 4=0.48%, 10=0.39%, 20=0.48%, 50=55.92%, 100=42.41% 00:30:34.648 lat (msec) : 250=0.32% 00:30:34.648 cpu : usr=36.03%, sys=0.57%, ctx=1056, majf=0, minf=9 00:30:34.648 IO depths : 1=1.9%, 2=4.1%, 4=12.6%, 8=69.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:34.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.648 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.648 filename1: (groupid=0, jobs=1): err= 0: pid=92125: Wed Nov 20 11:58:05 2024 00:30:34.648 read: IOPS=319, BW=1276KiB/s (1307kB/s)(12.5MiB/10026msec) 00:30:34.648 slat (usec): min=2, max=8028, avg=20.07, stdev=255.40 00:30:34.649 clat (msec): min=20, max=121, avg=49.99, stdev=13.81 00:30:34.649 lat (msec): min=20, max=121, avg=50.01, stdev=13.81 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 23], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 37], 00:30:34.649 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 48], 60.00th=[ 50], 00:30:34.649 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 75], 00:30:34.649 | 99.00th=[ 89], 99.50th=[ 99], 99.90th=[ 123], 99.95th=[ 123], 00:30:34.649 | 99.99th=[ 123] 00:30:34.649 bw ( KiB/s): min= 976, max= 1584, per=4.07%, avg=1275.20, stdev=163.67, samples=20 00:30:34.649 iops : min= 244, max= 396, avg=318.75, stdev=40.93, samples=20 00:30:34.649 lat (msec) : 50=60.71%, 100=38.98%, 250=0.31% 00:30:34.649 cpu : usr=35.87%, sys=0.23%, ctx=1072, majf=0, minf=9 00:30:34.649 IO depths : 1=1.4%, 2=3.1%, 4=11.1%, 8=72.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 issued rwts: total=3199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.649 filename1: (groupid=0, jobs=1): err= 0: pid=92126: Wed Nov 20 11:58:05 2024 00:30:34.649 read: IOPS=342, BW=1371KiB/s (1404kB/s)(13.4MiB/10033msec) 00:30:34.649 slat (usec): min=5, max=8020, avg=16.36, stdev=193.76 00:30:34.649 clat (usec): min=1862, max=123975, avg=46551.19, stdev=15463.66 00:30:34.649 lat (usec): min=1878, max=123981, avg=46567.55, stdev=15469.41 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 5], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 35], 00:30:34.649 | 30.00th=[ 37], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 48], 00:30:34.649 | 70.00th=[ 52], 80.00th=[ 61], 90.00th=[ 65], 95.00th=[ 72], 00:30:34.649 | 99.00th=[ 86], 99.50th=[ 92], 
99.90th=[ 125], 99.95th=[ 125], 00:30:34.649 | 99.99th=[ 125] 00:30:34.649 bw ( KiB/s): min= 1040, max= 2452, per=4.37%, avg=1369.40, stdev=291.40, samples=20 00:30:34.649 iops : min= 260, max= 613, avg=342.35, stdev=72.85, samples=20 00:30:34.649 lat (msec) : 2=0.20%, 4=0.26%, 10=1.40%, 20=0.93%, 50=64.70% 00:30:34.649 lat (msec) : 100=32.19%, 250=0.32% 00:30:34.649 cpu : usr=34.06%, sys=0.28%, ctx=1011, majf=0, minf=0 00:30:34.649 IO depths : 1=1.1%, 2=2.6%, 4=10.4%, 8=73.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 issued rwts: total=3439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.649 filename1: (groupid=0, jobs=1): err= 0: pid=92127: Wed Nov 20 11:58:05 2024 00:30:34.649 read: IOPS=305, BW=1221KiB/s (1251kB/s)(11.9MiB/10015msec) 00:30:34.649 slat (usec): min=2, max=12024, avg=20.57, stdev=298.60 00:30:34.649 clat (msec): min=22, max=114, avg=52.29, stdev=14.92 00:30:34.649 lat (msec): min=22, max=114, avg=52.31, stdev=14.93 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 39], 00:30:34.649 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 54], 00:30:34.649 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 82], 00:30:34.649 | 99.00th=[ 96], 99.50th=[ 104], 99.90th=[ 114], 99.95th=[ 114], 00:30:34.649 | 99.99th=[ 114] 00:30:34.649 bw ( KiB/s): min= 768, max= 1584, per=3.88%, avg=1216.50, stdev=184.78, samples=20 00:30:34.649 iops : min= 192, max= 396, avg=304.10, stdev=46.20, samples=20 00:30:34.649 lat (msec) : 50=53.76%, 100=45.65%, 250=0.59% 00:30:34.649 cpu : usr=31.80%, sys=0.52%, ctx=991, majf=0, minf=9 00:30:34.649 IO depths : 1=1.8%, 2=4.0%, 4=13.2%, 8=69.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 issued rwts: total=3058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.649 filename1: (groupid=0, jobs=1): err= 0: pid=92128: Wed Nov 20 11:58:05 2024 00:30:34.649 read: IOPS=335, BW=1340KiB/s (1372kB/s)(13.1MiB/10026msec) 00:30:34.649 slat (usec): min=5, max=8034, avg=27.31, stdev=322.77 00:30:34.649 clat (msec): min=17, max=112, avg=47.53, stdev=15.67 00:30:34.649 lat (msec): min=17, max=112, avg=47.55, stdev=15.67 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 35], 00:30:34.649 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 46], 60.00th=[ 48], 00:30:34.649 | 70.00th=[ 51], 80.00th=[ 59], 90.00th=[ 67], 95.00th=[ 81], 00:30:34.649 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 113], 99.95th=[ 113], 00:30:34.649 | 99.99th=[ 113] 00:30:34.649 bw ( KiB/s): min= 736, max= 1852, per=4.28%, avg=1342.05, stdev=261.82, samples=20 00:30:34.649 iops : min= 184, max= 463, avg=335.50, stdev=65.45, samples=20 00:30:34.649 lat (msec) : 20=0.54%, 50=69.25%, 100=28.88%, 250=1.34% 00:30:34.649 cpu : usr=42.00%, sys=0.49%, ctx=1304, majf=0, minf=9 00:30:34.649 IO depths : 1=2.0%, 2=4.6%, 4=13.5%, 8=68.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:34.649 issued rwts: total=3359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.649 filename1: (groupid=0, jobs=1): err= 0: pid=92129: Wed Nov 20 11:58:05 2024 00:30:34.649 read: IOPS=322, BW=1289KiB/s (1320kB/s)(12.6MiB/10007msec) 00:30:34.649 slat (usec): min=2, max=16035, avg=27.29, stdev=390.15 00:30:34.649 clat (msec): min=16, max=117, avg=49.49, stdev=13.69 00:30:34.649 lat (msec): min=16, max=117, avg=49.52, stdev=13.69 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 41], 00:30:34.649 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 50], 00:30:34.649 | 70.00th=[ 55], 80.00th=[ 61], 90.00th=[ 66], 95.00th=[ 74], 00:30:34.649 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 118], 99.95th=[ 118], 00:30:34.649 | 99.99th=[ 118] 00:30:34.649 bw ( KiB/s): min= 816, max= 1600, per=4.10%, avg=1283.05, stdev=156.62, samples=20 00:30:34.649 iops : min= 204, max= 400, avg=320.75, stdev=39.15, samples=20 00:30:34.649 lat (msec) : 20=0.50%, 50=61.54%, 100=37.69%, 250=0.28% 00:30:34.649 cpu : usr=44.46%, sys=0.43%, ctx=1226, majf=0, minf=9 00:30:34.649 IO depths : 1=1.7%, 2=4.1%, 4=12.2%, 8=70.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 issued rwts: total=3224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.649 filename1: (groupid=0, jobs=1): err= 0: pid=92130: Wed Nov 20 11:58:05 2024 00:30:34.649 read: IOPS=307, BW=1229KiB/s (1259kB/s)(12.0MiB/10006msec) 00:30:34.649 slat (usec): min=3, max=12041, avg=15.48, stdev=217.12 00:30:34.649 clat (msec): min=16, max=111, avg=51.93, stdev=15.18 00:30:34.649 lat (msec): min=16, max=111, avg=51.95, stdev=15.18 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 41], 00:30:34.649 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 53], 00:30:34.649 | 70.00th=[ 58], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 81], 00:30:34.649 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 112], 99.95th=[ 112], 00:30:34.649 | 99.99th=[ 112] 00:30:34.649 bw ( KiB/s): min= 768, max= 1584, per=3.91%, avg=1225.95, stdev=195.14, samples=20 00:30:34.649 iops : min= 192, max= 396, avg=306.45, stdev=48.80, samples=20 00:30:34.649 lat (msec) : 20=0.46%, 50=55.19%, 100=44.23%, 250=0.13% 00:30:34.649 cpu : usr=38.80%, sys=0.40%, ctx=1135, majf=0, minf=9 00:30:34.649 IO depths : 1=1.3%, 2=3.2%, 4=11.3%, 8=71.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 complete : 0=0.0%, 4=90.6%, 8=5.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 issued rwts: total=3075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.649 filename2: (groupid=0, jobs=1): err= 0: pid=92131: Wed Nov 20 11:58:05 2024 00:30:34.649 read: IOPS=317, BW=1271KiB/s (1301kB/s)(12.4MiB/10027msec) 00:30:34.649 slat (usec): min=5, max=9021, avg=24.70, stdev=325.79 00:30:34.649 clat (msec): min=17, max=127, avg=50.20, stdev=15.46 00:30:34.649 lat (msec): min=17, max=127, avg=50.22, stdev=15.46 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 37], 00:30:34.649 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 48], 
60.00th=[ 50], 00:30:34.649 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 79], 00:30:34.649 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 126], 99.95th=[ 126], 00:30:34.649 | 99.99th=[ 128] 00:30:34.649 bw ( KiB/s): min= 808, max= 1584, per=4.04%, avg=1267.05, stdev=176.66, samples=20 00:30:34.649 iops : min= 202, max= 396, avg=316.75, stdev=44.16, samples=20 00:30:34.649 lat (msec) : 20=0.50%, 50=61.51%, 100=36.58%, 250=1.41% 00:30:34.649 cpu : usr=31.69%, sys=0.24%, ctx=952, majf=0, minf=9 00:30:34.649 IO depths : 1=1.1%, 2=2.8%, 4=11.1%, 8=72.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.649 issued rwts: total=3185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.649 filename2: (groupid=0, jobs=1): err= 0: pid=92132: Wed Nov 20 11:58:05 2024 00:30:34.649 read: IOPS=351, BW=1407KiB/s (1440kB/s)(13.8MiB/10044msec) 00:30:34.649 slat (usec): min=5, max=9047, avg=19.40, stdev=216.84 00:30:34.649 clat (msec): min=10, max=103, avg=45.33, stdev=13.62 00:30:34.649 lat (msec): min=10, max=103, avg=45.35, stdev=13.62 00:30:34.649 clat percentiles (msec): 00:30:34.649 | 1.00th=[ 21], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 35], 00:30:34.649 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 46], 60.00th=[ 48], 00:30:34.649 | 70.00th=[ 50], 80.00th=[ 56], 90.00th=[ 63], 95.00th=[ 71], 00:30:34.649 | 99.00th=[ 87], 99.50th=[ 89], 99.90th=[ 104], 99.95th=[ 104], 00:30:34.649 | 99.99th=[ 104] 00:30:34.649 bw ( KiB/s): min= 1096, max= 1667, per=4.49%, avg=1406.40, stdev=158.95, samples=20 00:30:34.649 iops : min= 274, max= 416, avg=351.55, stdev=39.69, samples=20 00:30:34.650 lat (msec) : 20=0.91%, 50=73.02%, 100=25.76%, 250=0.31% 00:30:34.650 cpu : usr=33.65%, sys=0.75%, ctx=1182, majf=0, minf=9 00:30:34.650 IO depths : 1=1.0%, 2=2.4%, 4=9.9%, 8=74.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 issued rwts: total=3532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.650 filename2: (groupid=0, jobs=1): err= 0: pid=92133: Wed Nov 20 11:58:05 2024 00:30:34.650 read: IOPS=334, BW=1337KiB/s (1369kB/s)(13.1MiB/10001msec) 00:30:34.650 slat (usec): min=2, max=4027, avg=15.93, stdev=132.15 00:30:34.650 clat (usec): min=653, max=111829, avg=47743.69, stdev=16108.30 00:30:34.650 lat (usec): min=658, max=111835, avg=47759.62, stdev=16109.87 00:30:34.650 clat percentiles (usec): 00:30:34.650 | 1.00th=[ 1139], 5.00th=[ 24773], 10.00th=[ 30540], 20.00th=[ 35914], 00:30:34.650 | 30.00th=[ 40633], 40.00th=[ 44303], 50.00th=[ 46924], 60.00th=[ 47973], 00:30:34.650 | 70.00th=[ 53216], 80.00th=[ 60031], 90.00th=[ 67634], 95.00th=[ 76022], 00:30:34.650 | 99.00th=[ 92799], 99.50th=[ 95945], 99.90th=[105382], 99.95th=[111674], 00:30:34.650 | 99.99th=[111674] 00:30:34.650 bw ( KiB/s): min= 842, max= 1664, per=4.13%, avg=1295.42, stdev=217.92, samples=19 00:30:34.650 iops : min= 210, max= 416, avg=323.79, stdev=54.59, samples=19 00:30:34.650 lat (usec) : 750=0.06%, 1000=0.21% 00:30:34.650 lat (msec) : 2=1.44%, 4=0.42%, 10=0.54%, 50=61.86%, 100=35.15% 00:30:34.650 lat (msec) : 250=0.33% 00:30:34.650 cpu : usr=41.14%, sys=0.42%, ctx=1243, majf=0, minf=9 00:30:34.650 
IO depths : 1=1.7%, 2=4.2%, 4=12.3%, 8=70.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:30:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 issued rwts: total=3343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.650 filename2: (groupid=0, jobs=1): err= 0: pid=92134: Wed Nov 20 11:58:05 2024 00:30:34.650 read: IOPS=304, BW=1220KiB/s (1249kB/s)(11.9MiB/10002msec) 00:30:34.650 slat (usec): min=2, max=12017, avg=18.17, stdev=261.62 00:30:34.650 clat (msec): min=2, max=136, avg=52.36, stdev=16.66 00:30:34.650 lat (msec): min=2, max=136, avg=52.38, stdev=16.66 00:30:34.650 clat percentiles (msec): 00:30:34.650 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 39], 00:30:34.650 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 53], 00:30:34.650 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 84], 00:30:34.650 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 138], 99.95th=[ 138], 00:30:34.650 | 99.99th=[ 138] 00:30:34.650 bw ( KiB/s): min= 763, max= 1488, per=3.84%, avg=1202.53, stdev=188.92, samples=19 00:30:34.650 iops : min= 190, max= 372, avg=300.58, stdev=47.32, samples=19 00:30:34.650 lat (msec) : 4=0.13%, 10=0.46%, 20=0.46%, 50=55.84%, 100=41.25% 00:30:34.650 lat (msec) : 250=1.87% 00:30:34.650 cpu : usr=32.53%, sys=0.31%, ctx=885, majf=0, minf=9 00:30:34.650 IO depths : 1=1.4%, 2=3.5%, 4=12.5%, 8=70.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 complete : 0=0.0%, 4=90.8%, 8=4.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 issued rwts: total=3050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.650 filename2: (groupid=0, jobs=1): err= 0: pid=92135: Wed Nov 20 11:58:05 2024 00:30:34.650 read: IOPS=317, BW=1272KiB/s (1302kB/s)(12.4MiB/10016msec) 00:30:34.650 slat (usec): min=2, max=7480, avg=18.62, stdev=188.49 00:30:34.650 clat (msec): min=20, max=124, avg=50.16, stdev=15.02 00:30:34.650 lat (msec): min=20, max=124, avg=50.18, stdev=15.02 00:30:34.650 clat percentiles (msec): 00:30:34.650 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 40], 00:30:34.650 | 30.00th=[ 43], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 50], 00:30:34.650 | 70.00th=[ 56], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 81], 00:30:34.650 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 122], 99.95th=[ 122], 00:30:34.650 | 99.99th=[ 126] 00:30:34.650 bw ( KiB/s): min= 656, max= 1536, per=4.05%, avg=1269.85, stdev=204.98, samples=20 00:30:34.650 iops : min= 164, max= 384, avg=317.40, stdev=51.24, samples=20 00:30:34.650 lat (msec) : 50=62.31%, 100=36.75%, 250=0.94% 00:30:34.650 cpu : usr=40.50%, sys=0.53%, ctx=1219, majf=0, minf=9 00:30:34.650 IO depths : 1=2.0%, 2=4.6%, 4=13.6%, 8=68.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.650 filename2: (groupid=0, jobs=1): err= 0: pid=92136: Wed Nov 20 11:58:05 2024 00:30:34.650 read: IOPS=304, BW=1219KiB/s (1248kB/s)(11.9MiB/10011msec) 00:30:34.650 slat (usec): min=2, max=4014, avg=13.48, stdev=73.21 00:30:34.650 clat 
(msec): min=15, max=123, avg=52.42, stdev=16.03 00:30:34.650 lat (msec): min=15, max=123, avg=52.43, stdev=16.03 00:30:34.650 clat percentiles (msec): 00:30:34.650 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:30:34.650 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 54], 00:30:34.650 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 71], 95.00th=[ 81], 00:30:34.650 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 124], 99.95th=[ 124], 00:30:34.650 | 99.99th=[ 124] 00:30:34.650 bw ( KiB/s): min= 768, max= 1584, per=3.87%, avg=1213.20, stdev=196.91, samples=20 00:30:34.650 iops : min= 192, max= 396, avg=303.25, stdev=49.26, samples=20 00:30:34.650 lat (msec) : 20=0.16%, 50=52.21%, 100=45.43%, 250=2.20% 00:30:34.650 cpu : usr=39.57%, sys=0.40%, ctx=1432, majf=0, minf=9 00:30:34.650 IO depths : 1=0.9%, 2=2.3%, 4=9.4%, 8=74.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 complete : 0=0.0%, 4=90.2%, 8=5.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 issued rwts: total=3051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.650 filename2: (groupid=0, jobs=1): err= 0: pid=92137: Wed Nov 20 11:58:05 2024 00:30:34.650 read: IOPS=325, BW=1301KiB/s (1332kB/s)(12.7MiB/10008msec) 00:30:34.650 slat (usec): min=2, max=12032, avg=22.89, stdev=286.39 00:30:34.650 clat (msec): min=7, max=125, avg=49.04, stdev=14.15 00:30:34.650 lat (msec): min=7, max=125, avg=49.06, stdev=14.15 00:30:34.650 clat percentiles (msec): 00:30:34.650 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 39], 00:30:34.650 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 50], 00:30:34.650 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 73], 00:30:34.650 | 99.00th=[ 99], 99.50th=[ 99], 99.90th=[ 126], 99.95th=[ 126], 00:30:34.650 | 99.99th=[ 126] 00:30:34.650 bw ( KiB/s): min= 912, max= 1584, per=4.14%, avg=1296.05, stdev=178.62, samples=19 00:30:34.650 iops : min= 228, max= 396, avg=324.00, stdev=44.68, samples=19 00:30:34.650 lat (msec) : 10=0.06%, 20=0.55%, 50=62.46%, 100=36.65%, 250=0.28% 00:30:34.650 cpu : usr=44.08%, sys=0.34%, ctx=1202, majf=0, minf=9 00:30:34.650 IO depths : 1=1.5%, 2=3.3%, 4=11.5%, 8=71.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 issued rwts: total=3255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.650 filename2: (groupid=0, jobs=1): err= 0: pid=92138: Wed Nov 20 11:58:05 2024 00:30:34.650 read: IOPS=343, BW=1373KiB/s (1405kB/s)(13.5MiB/10040msec) 00:30:34.650 slat (usec): min=5, max=8032, avg=19.85, stdev=226.72 00:30:34.650 clat (msec): min=7, max=112, avg=46.48, stdev=14.51 00:30:34.650 lat (msec): min=7, max=112, avg=46.50, stdev=14.51 00:30:34.650 clat percentiles (msec): 00:30:34.650 | 1.00th=[ 15], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 35], 00:30:34.650 | 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 46], 60.00th=[ 48], 00:30:34.650 | 70.00th=[ 53], 80.00th=[ 59], 90.00th=[ 66], 95.00th=[ 71], 00:30:34.650 | 99.00th=[ 90], 99.50th=[ 104], 99.90th=[ 113], 99.95th=[ 113], 00:30:34.650 | 99.99th=[ 113] 00:30:34.650 bw ( KiB/s): min= 942, max= 1736, per=4.38%, avg=1371.20, stdev=230.11, samples=20 00:30:34.650 iops : min= 235, max= 434, avg=342.75, stdev=57.57, samples=20 00:30:34.650 
lat (msec) : 10=0.41%, 20=1.22%, 50=65.31%, 100=32.45%, 250=0.61% 00:30:34.650 cpu : usr=38.53%, sys=0.44%, ctx=1110, majf=0, minf=9 00:30:34.650 IO depths : 1=1.5%, 2=3.7%, 4=12.0%, 8=71.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.650 issued rwts: total=3445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.650 00:30:34.650 Run status group 0 (all jobs): 00:30:34.650 READ: bw=30.6MiB/s (32.1MB/s), 1219KiB/s-1537KiB/s (1248kB/s-1574kB/s), io=307MiB (322MB), run=10001-10048msec 00:30:34.650 11:58:05 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:34.650 11:58:05 -- target/dif.sh@43 -- # local sub 00:30:34.650 11:58:05 -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.650 11:58:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:34.650 11:58:05 -- target/dif.sh@36 -- # local sub_id=0 00:30:34.650 11:58:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:34.650 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.650 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.650 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.650 11:58:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:34.650 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.650 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.650 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.650 11:58:05 -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.650 11:58:05 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:34.650 11:58:05 -- target/dif.sh@36 -- # local sub_id=1 00:30:34.650 11:58:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:34.650 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.650 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.650 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.651 11:58:05 -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:34.651 11:58:05 -- target/dif.sh@36 -- # local sub_id=2 00:30:34.651 11:58:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@115 -- # NULL_DIF=1 00:30:34.651 11:58:05 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:34.651 11:58:05 -- target/dif.sh@115 -- # numjobs=2 00:30:34.651 11:58:05 -- target/dif.sh@115 -- # iodepth=8 00:30:34.651 11:58:05 -- target/dif.sh@115 -- # 
runtime=5 00:30:34.651 11:58:05 -- target/dif.sh@115 -- # files=1 00:30:34.651 11:58:05 -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:34.651 11:58:05 -- target/dif.sh@28 -- # local sub 00:30:34.651 11:58:05 -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.651 11:58:05 -- target/dif.sh@31 -- # create_subsystem 0 00:30:34.651 11:58:05 -- target/dif.sh@18 -- # local sub_id=0 00:30:34.651 11:58:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 bdev_null0 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 [2024-11-20 11:58:05.927545] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.651 11:58:05 -- target/dif.sh@31 -- # create_subsystem 1 00:30:34.651 11:58:05 -- target/dif.sh@18 -- # local sub_id=1 00:30:34.651 11:58:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 bdev_null1 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.651 11:58:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 11:58:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 11:58:05 -- target/dif.sh@118 -- # fio 
/dev/fd/62 00:30:34.651 11:58:05 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:34.651 11:58:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:34.651 11:58:05 -- nvmf/common.sh@520 -- # config=() 00:30:34.651 11:58:05 -- nvmf/common.sh@520 -- # local subsystem config 00:30:34.651 11:58:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.651 11:58:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:34.651 11:58:05 -- target/dif.sh@82 -- # gen_fio_conf 00:30:34.651 11:58:05 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.651 11:58:05 -- target/dif.sh@54 -- # local file 00:30:34.651 11:58:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:34.651 { 00:30:34.651 "params": { 00:30:34.651 "name": "Nvme$subsystem", 00:30:34.651 "trtype": "$TEST_TRANSPORT", 00:30:34.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.651 "adrfam": "ipv4", 00:30:34.651 "trsvcid": "$NVMF_PORT", 00:30:34.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.651 "hdgst": ${hdgst:-false}, 00:30:34.651 "ddgst": ${ddgst:-false} 00:30:34.651 }, 00:30:34.651 "method": "bdev_nvme_attach_controller" 00:30:34.651 } 00:30:34.651 EOF 00:30:34.651 )") 00:30:34.651 11:58:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:34.651 11:58:05 -- target/dif.sh@56 -- # cat 00:30:34.651 11:58:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.651 11:58:05 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:34.651 11:58:05 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:34.651 11:58:05 -- common/autotest_common.sh@1330 -- # shift 00:30:34.651 11:58:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:34.651 11:58:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.651 11:58:05 -- nvmf/common.sh@542 -- # cat 00:30:34.651 11:58:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:34.651 11:58:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:34.651 11:58:05 -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.651 11:58:05 -- target/dif.sh@73 -- # cat 00:30:34.651 11:58:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:34.651 11:58:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:34.651 11:58:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:34.651 11:58:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:34.651 { 00:30:34.651 "params": { 00:30:34.651 "name": "Nvme$subsystem", 00:30:34.651 "trtype": "$TEST_TRANSPORT", 00:30:34.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.651 "adrfam": "ipv4", 00:30:34.651 "trsvcid": "$NVMF_PORT", 00:30:34.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.651 "hdgst": ${hdgst:-false}, 00:30:34.651 "ddgst": ${ddgst:-false} 00:30:34.651 }, 00:30:34.651 "method": "bdev_nvme_attach_controller" 00:30:34.651 } 00:30:34.651 EOF 00:30:34.651 )") 00:30:34.651 11:58:05 -- nvmf/common.sh@542 -- # cat 00:30:34.651 11:58:05 -- target/dif.sh@72 -- # (( file++ )) 00:30:34.651 11:58:05 -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.651 11:58:05 -- nvmf/common.sh@544 -- # jq . 
00:30:34.651 11:58:05 -- nvmf/common.sh@545 -- # IFS=, 00:30:34.651 11:58:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:34.651 "params": { 00:30:34.651 "name": "Nvme0", 00:30:34.651 "trtype": "tcp", 00:30:34.651 "traddr": "10.0.0.2", 00:30:34.651 "adrfam": "ipv4", 00:30:34.651 "trsvcid": "4420", 00:30:34.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:34.651 "hdgst": false, 00:30:34.651 "ddgst": false 00:30:34.651 }, 00:30:34.651 "method": "bdev_nvme_attach_controller" 00:30:34.651 },{ 00:30:34.651 "params": { 00:30:34.651 "name": "Nvme1", 00:30:34.651 "trtype": "tcp", 00:30:34.651 "traddr": "10.0.0.2", 00:30:34.651 "adrfam": "ipv4", 00:30:34.651 "trsvcid": "4420", 00:30:34.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.651 "hdgst": false, 00:30:34.651 "ddgst": false 00:30:34.651 }, 00:30:34.651 "method": "bdev_nvme_attach_controller" 00:30:34.651 }' 00:30:34.651 11:58:06 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:34.651 11:58:06 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:34.651 11:58:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.651 11:58:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:34.651 11:58:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:34.651 11:58:06 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:34.651 11:58:06 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:34.651 11:58:06 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:34.651 11:58:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:34.651 11:58:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.651 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:34.651 ... 00:30:34.651 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:34.651 ... 00:30:34.651 fio-3.35 00:30:34.651 Starting 4 threads 00:30:34.651 [2024-11-20 11:58:06.702755] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
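For reference, the merged JSON printed above (two bdev_nvme_attach_controller entries, one for cnode0 and one for cnode1) is what fio's SPDK bdev engine consumes through --spdk_json_conf; the harness feeds it via /dev/fd/62 and pipes a generated job file via /dev/fd/61. A rough standalone equivalent is sketched below. The job file contents are an assumption, not the harness's actual gen_fio_conf output: the section names, thread=1, and the Nvme0n1/Nvme1n1 bdev names follow SPDK's usual <controller-name>n<namespace-index> convention for attached controllers.

  # job.fio (sketch, assumed values)
  [global]
  thread=1          # the SPDK fio plugin runs jobs as threads
  rw=randread
  bs=8k
  iodepth=8

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

  # launch, mirroring the LD_PRELOAD and command line shown in the log, with the
  # configuration the harness generates above written to ./bdev.json:
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./job.fio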
00:30:34.651 [2024-11-20 11:58:06.702798] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:38.848 00:30:38.848 filename0: (groupid=0, jobs=1): err= 0: pid=92275: Wed Nov 20 11:58:11 2024 00:30:38.848 read: IOPS=2733, BW=21.4MiB/s (22.4MB/s)(107MiB/5003msec) 00:30:38.848 slat (nsec): min=2640, max=98450, avg=14569.00, stdev=11687.09 00:30:38.848 clat (usec): min=727, max=8004, avg=2850.13, stdev=229.99 00:30:38.848 lat (usec): min=733, max=8015, avg=2864.69, stdev=231.62 00:30:38.848 clat percentiles (usec): 00:30:38.848 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:30:38.848 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2835], 60.00th=[ 2868], 00:30:38.848 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:30:38.848 | 99.00th=[ 3261], 99.50th=[ 3458], 99.90th=[ 4359], 99.95th=[ 7898], 00:30:38.848 | 99.99th=[ 7898] 00:30:38.848 bw ( KiB/s): min=21376, max=22400, per=25.08%, avg=21879.56, stdev=349.75, samples=9 00:30:38.848 iops : min= 2672, max= 2800, avg=2734.89, stdev=43.76, samples=9 00:30:38.848 lat (usec) : 750=0.03%, 1000=0.38% 00:30:38.848 lat (msec) : 2=0.18%, 4=99.20%, 10=0.21% 00:30:38.848 cpu : usr=96.68%, sys=2.40%, ctx=4, majf=0, minf=0 00:30:38.848 IO depths : 1=9.8%, 2=24.6%, 4=50.4%, 8=15.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.848 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.848 issued rwts: total=13677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.848 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:38.848 filename0: (groupid=0, jobs=1): err= 0: pid=92276: Wed Nov 20 11:58:11 2024 00:30:38.848 read: IOPS=2727, BW=21.3MiB/s (22.3MB/s)(107MiB/5002msec) 00:30:38.848 slat (nsec): min=5189, max=81778, avg=15551.94, stdev=11447.98 00:30:38.848 clat (usec): min=758, max=4587, avg=2871.29, stdev=179.78 00:30:38.848 lat (usec): min=764, max=4629, avg=2886.84, stdev=178.08 00:30:38.848 clat percentiles (usec): 00:30:38.848 | 1.00th=[ 2311], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:30:38.848 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:30:38.848 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3097], 00:30:38.848 | 99.00th=[ 3392], 99.50th=[ 3523], 99.90th=[ 4228], 99.95th=[ 4490], 00:30:38.848 | 99.99th=[ 4555] 00:30:38.848 bw ( KiB/s): min=21248, max=22272, per=25.01%, avg=21812.44, stdev=298.23, samples=9 00:30:38.848 iops : min= 2656, max= 2784, avg=2726.44, stdev=37.29, samples=9 00:30:38.848 lat (usec) : 1000=0.06% 00:30:38.848 lat (msec) : 2=0.13%, 4=99.63%, 10=0.18% 00:30:38.848 cpu : usr=95.36%, sys=3.36%, ctx=5, majf=0, minf=9 00:30:38.848 IO depths : 1=3.7%, 2=22.4%, 4=52.6%, 8=21.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.848 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.848 issued rwts: total=13643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.848 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:38.848 filename1: (groupid=0, jobs=1): err= 0: pid=92277: Wed Nov 20 11:58:11 2024 00:30:38.848 read: IOPS=2721, BW=21.3MiB/s (22.3MB/s)(106MiB/5001msec) 00:30:38.848 slat (usec): min=5, max=310, avg=18.89, stdev=13.53 00:30:38.848 clat (usec): min=672, max=5930, avg=2848.83, stdev=202.46 00:30:38.848 lat (usec): min=682, max=5967, avg=2867.72, stdev=203.14 
00:30:38.848 clat percentiles (usec): 00:30:38.848 | 1.00th=[ 2376], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:30:38.848 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:30:38.848 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:30:38.848 | 99.00th=[ 3359], 99.50th=[ 3851], 99.90th=[ 4948], 99.95th=[ 5538], 00:30:38.848 | 99.99th=[ 5866] 00:30:38.848 bw ( KiB/s): min=21248, max=22288, per=24.98%, avg=21786.56, stdev=297.02, samples=9 00:30:38.848 iops : min= 2656, max= 2786, avg=2723.22, stdev=37.17, samples=9 00:30:38.848 lat (usec) : 750=0.01% 00:30:38.848 lat (msec) : 2=0.35%, 4=99.21%, 10=0.44% 00:30:38.848 cpu : usr=97.24%, sys=1.80%, ctx=66, majf=0, minf=9 00:30:38.848 IO depths : 1=6.0%, 2=24.4%, 4=50.6%, 8=19.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.849 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.849 issued rwts: total=13611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:38.849 filename1: (groupid=0, jobs=1): err= 0: pid=92278: Wed Nov 20 11:58:11 2024 00:30:38.849 read: IOPS=2723, BW=21.3MiB/s (22.3MB/s)(106MiB/5001msec) 00:30:38.849 slat (nsec): min=5175, max=79059, avg=15900.70, stdev=11387.65 00:30:38.849 clat (usec): min=933, max=4667, avg=2874.25, stdev=222.03 00:30:38.849 lat (usec): min=940, max=4672, avg=2890.15, stdev=220.31 00:30:38.849 clat percentiles (usec): 00:30:38.849 | 1.00th=[ 2180], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:30:38.849 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:30:38.849 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3130], 00:30:38.849 | 99.00th=[ 3589], 99.50th=[ 3982], 99.90th=[ 4490], 99.95th=[ 4555], 00:30:38.849 | 99.99th=[ 4621] 00:30:38.849 bw ( KiB/s): min=21248, max=22272, per=24.98%, avg=21790.22, stdev=319.88, samples=9 00:30:38.849 iops : min= 2656, max= 2784, avg=2723.67, stdev=39.96, samples=9 00:30:38.849 lat (usec) : 1000=0.04% 00:30:38.849 lat (msec) : 2=0.60%, 4=98.86%, 10=0.49% 00:30:38.849 cpu : usr=95.66%, sys=3.14%, ctx=14, majf=0, minf=10 00:30:38.849 IO depths : 1=4.1%, 2=22.2%, 4=52.6%, 8=21.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.849 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.849 issued rwts: total=13620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:38.849 00:30:38.849 Run status group 0 (all jobs): 00:30:38.849 READ: bw=85.2MiB/s (89.3MB/s), 21.3MiB/s-21.4MiB/s (22.3MB/s-22.4MB/s), io=426MiB (447MB), run=5001-5003msec 00:30:39.108 11:58:12 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:39.108 11:58:12 -- target/dif.sh@43 -- # local sub 00:30:39.108 11:58:12 -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.108 11:58:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:39.108 11:58:12 -- target/dif.sh@36 -- # local sub_id=0 00:30:39.108 11:58:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.108 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.108 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 11:58:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.108 11:58:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
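The destroy_subsystems 0 1 teardown running here (and continuing below) issues two RPCs per index, removing each NVMe-oF subsystem before deleting the null bdev that backed its namespace. A standalone sketch, assuming the rpc_cmd helper forwards to scripts/rpc.py on the default RPC socket as in stock SPDK:

  for i in 0 1; do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # detach listeners and namespaces
    scripts/rpc.py bdev_null_delete "bdev_null$i"                        # then drop the backing bdev
  done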
00:30:39.108 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.108 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 11:58:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.108 11:58:12 -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.108 11:58:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:39.108 11:58:12 -- target/dif.sh@36 -- # local sub_id=1 00:30:39.108 11:58:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:39.108 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.108 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 11:58:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.108 11:58:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:39.108 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.108 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 11:58:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.108 00:30:39.108 real 0m23.858s 00:30:39.108 user 2m6.392s 00:30:39.108 sys 0m2.863s 00:30:39.108 11:58:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:39.108 ************************************ 00:30:39.108 END TEST fio_dif_rand_params 00:30:39.108 ************************************ 00:30:39.108 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 11:58:12 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:39.108 11:58:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:39.108 11:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:39.108 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.368 ************************************ 00:30:39.368 START TEST fio_dif_digest 00:30:39.368 ************************************ 00:30:39.368 11:58:12 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:30:39.368 11:58:12 -- target/dif.sh@123 -- # local NULL_DIF 00:30:39.368 11:58:12 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:39.368 11:58:12 -- target/dif.sh@125 -- # local hdgst ddgst 00:30:39.368 11:58:12 -- target/dif.sh@127 -- # NULL_DIF=3 00:30:39.368 11:58:12 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:39.368 11:58:12 -- target/dif.sh@127 -- # numjobs=3 00:30:39.368 11:58:12 -- target/dif.sh@127 -- # iodepth=3 00:30:39.368 11:58:12 -- target/dif.sh@127 -- # runtime=10 00:30:39.368 11:58:12 -- target/dif.sh@128 -- # hdgst=true 00:30:39.368 11:58:12 -- target/dif.sh@128 -- # ddgst=true 00:30:39.368 11:58:12 -- target/dif.sh@130 -- # create_subsystems 0 00:30:39.368 11:58:12 -- target/dif.sh@28 -- # local sub 00:30:39.368 11:58:12 -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.368 11:58:12 -- target/dif.sh@31 -- # create_subsystem 0 00:30:39.368 11:58:12 -- target/dif.sh@18 -- # local sub_id=0 00:30:39.368 11:58:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:39.368 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.368 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.368 bdev_null0 00:30:39.368 11:58:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.368 11:58:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:39.368 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.368 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.368 11:58:12 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.368 11:58:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:39.368 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.368 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.368 11:58:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.368 11:58:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:39.368 11:58:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.368 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:30:39.368 [2024-11-20 11:58:12.206592] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.368 11:58:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.368 11:58:12 -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:39.368 11:58:12 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:39.368 11:58:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:39.368 11:58:12 -- nvmf/common.sh@520 -- # config=() 00:30:39.368 11:58:12 -- nvmf/common.sh@520 -- # local subsystem config 00:30:39.368 11:58:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:39.368 11:58:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.368 11:58:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:39.368 { 00:30:39.368 "params": { 00:30:39.368 "name": "Nvme$subsystem", 00:30:39.368 "trtype": "$TEST_TRANSPORT", 00:30:39.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:39.368 "adrfam": "ipv4", 00:30:39.368 "trsvcid": "$NVMF_PORT", 00:30:39.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.368 "hdgst": ${hdgst:-false}, 00:30:39.368 "ddgst": ${ddgst:-false} 00:30:39.368 }, 00:30:39.368 "method": "bdev_nvme_attach_controller" 00:30:39.368 } 00:30:39.368 EOF 00:30:39.368 )") 00:30:39.368 11:58:12 -- target/dif.sh@82 -- # gen_fio_conf 00:30:39.368 11:58:12 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.368 11:58:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:39.368 11:58:12 -- target/dif.sh@54 -- # local file 00:30:39.368 11:58:12 -- target/dif.sh@56 -- # cat 00:30:39.368 11:58:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:39.368 11:58:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:39.368 11:58:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:39.368 11:58:12 -- common/autotest_common.sh@1330 -- # shift 00:30:39.368 11:58:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:39.368 11:58:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.368 11:58:12 -- nvmf/common.sh@542 -- # cat 00:30:39.368 11:58:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:39.368 11:58:12 -- target/dif.sh@72 -- # (( file <= files )) 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:39.368 11:58:12 -- nvmf/common.sh@544 -- # jq . 
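The fio_dif_digest setup above builds a single DIF-protected target: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exposed as a namespace of cnode0 on the TCP listener at 10.0.0.2:4420. The same four calls as a standalone sketch (argument values copied from the trace; this assumes rpc_cmd wraps scripts/rpc.py as in stock SPDK):

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420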
00:30:39.368 11:58:12 -- nvmf/common.sh@545 -- # IFS=, 00:30:39.368 11:58:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:39.368 "params": { 00:30:39.368 "name": "Nvme0", 00:30:39.368 "trtype": "tcp", 00:30:39.368 "traddr": "10.0.0.2", 00:30:39.368 "adrfam": "ipv4", 00:30:39.368 "trsvcid": "4420", 00:30:39.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:39.368 "hdgst": true, 00:30:39.368 "ddgst": true 00:30:39.368 }, 00:30:39.368 "method": "bdev_nvme_attach_controller" 00:30:39.368 }' 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:39.368 11:58:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:39.368 11:58:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:39.368 11:58:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:39.368 11:58:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:39.368 11:58:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:39.368 11:58:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.628 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:39.628 ... 00:30:39.628 fio-3.35 00:30:39.628 Starting 3 threads 00:30:39.886 [2024-11-20 11:58:12.808574] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
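The asan_lib probing above is the fio_plugin helper deciding whether a sanitizer runtime must be preloaded ahead of the bdev plugin; here both lookups come back empty, so only the plugin itself ends up in LD_PRELOAD. A condensed sketch of that logic (the ldd/grep/awk pipeline is taken from the trace; the loop-and-break form is a simplification of the helper, and hdgst/ddgst stay in the JSON config exactly as printed above):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=
  for name in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$name" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
  done
  # then launch as before:
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./job.fio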
00:30:39.886 [2024-11-20 11:58:12.808618] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:52.117 00:30:52.117 filename0: (groupid=0, jobs=1): err= 0: pid=92385: Wed Nov 20 11:58:22 2024 00:30:52.117 read: IOPS=309, BW=38.6MiB/s (40.5MB/s)(387MiB/10005msec) 00:30:52.117 slat (nsec): min=5819, max=56597, avg=15044.30, stdev=8290.82 00:30:52.117 clat (usec): min=5183, max=12679, avg=9684.97, stdev=1209.31 00:30:52.117 lat (usec): min=5189, max=12709, avg=9700.02, stdev=1209.45 00:30:52.117 clat percentiles (usec): 00:30:52.117 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 8586], 20.00th=[ 9110], 00:30:52.117 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:30:52.117 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:30:52.117 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12256], 99.95th=[12387], 00:30:52.117 | 99.99th=[12649] 00:30:52.117 bw ( KiB/s): min=36096, max=44544, per=34.97%, avg=39552.00, stdev=1883.96, samples=20 00:30:52.117 iops : min= 282, max= 348, avg=309.00, stdev=14.72, samples=20 00:30:52.117 lat (msec) : 10=57.84%, 20=42.16% 00:30:52.117 cpu : usr=95.87%, sys=3.07%, ctx=114, majf=0, minf=9 00:30:52.117 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.117 issued rwts: total=3093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:52.117 filename0: (groupid=0, jobs=1): err= 0: pid=92386: Wed Nov 20 11:58:22 2024 00:30:52.117 read: IOPS=322, BW=40.3MiB/s (42.3MB/s)(403MiB/10007msec) 00:30:52.117 slat (nsec): min=5838, max=57245, avg=14938.32, stdev=6846.11 00:30:52.117 clat (usec): min=6741, max=50411, avg=9287.06, stdev=4137.79 00:30:52.117 lat (usec): min=6749, max=50421, avg=9301.99, stdev=4137.68 00:30:52.117 clat percentiles (usec): 00:30:52.117 | 1.00th=[ 7504], 5.00th=[ 7898], 10.00th=[ 8094], 20.00th=[ 8455], 00:30:52.117 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:30:52.117 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:30:52.117 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594], 00:30:52.117 | 99.99th=[50594] 00:30:52.117 bw ( KiB/s): min=33536, max=44288, per=36.48%, avg=41254.40, stdev=2818.17, samples=20 00:30:52.117 iops : min= 262, max= 346, avg=322.30, stdev=22.02, samples=20 00:30:52.117 lat (msec) : 10=96.44%, 20=2.54%, 50=0.68%, 100=0.34% 00:30:52.117 cpu : usr=95.03%, sys=3.61%, ctx=10, majf=0, minf=9 00:30:52.117 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.117 issued rwts: total=3226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:52.117 filename0: (groupid=0, jobs=1): err= 0: pid=92387: Wed Nov 20 11:58:22 2024 00:30:52.117 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(315MiB/10005msec) 00:30:52.117 slat (nsec): min=5837, max=83300, avg=17427.30, stdev=7472.65 00:30:52.117 clat (usec): min=6620, max=15206, avg=11874.59, stdev=1254.18 00:30:52.117 lat (usec): min=6631, max=15228, avg=11892.02, stdev=1255.73 00:30:52.117 clat percentiles (usec): 00:30:52.117 | 1.00th=[ 
7308], 5.00th=[ 8160], 10.00th=[11076], 20.00th=[11469], 00:30:52.117 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:30:52.117 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:30:52.117 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14222], 99.95th=[15139], 00:30:52.117 | 99.99th=[15270] 00:30:52.117 bw ( KiB/s): min=30720, max=35840, per=28.52%, avg=32259.20, stdev=1443.45, samples=20 00:30:52.117 iops : min= 240, max= 280, avg=252.00, stdev=11.28, samples=20 00:30:52.117 lat (msec) : 10=7.17%, 20=92.83% 00:30:52.117 cpu : usr=95.54%, sys=3.30%, ctx=102, majf=0, minf=9 00:30:52.117 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.117 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:52.117 00:30:52.117 Run status group 0 (all jobs): 00:30:52.117 READ: bw=110MiB/s (116MB/s), 31.5MiB/s-40.3MiB/s (33.1MB/s-42.3MB/s), io=1105MiB (1159MB), run=10005-10007msec 00:30:52.117 11:58:23 -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:52.117 11:58:23 -- target/dif.sh@43 -- # local sub 00:30:52.117 11:58:23 -- target/dif.sh@45 -- # for sub in "$@" 00:30:52.117 11:58:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:52.117 11:58:23 -- target/dif.sh@36 -- # local sub_id=0 00:30:52.117 11:58:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.117 11:58:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.117 11:58:23 -- common/autotest_common.sh@10 -- # set +x 00:30:52.117 11:58:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.117 11:58:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:52.118 11:58:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.118 11:58:23 -- common/autotest_common.sh@10 -- # set +x 00:30:52.118 11:58:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.118 00:30:52.118 real 0m11.029s 00:30:52.118 user 0m29.343s 00:30:52.118 sys 0m1.309s 00:30:52.118 11:58:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:52.118 11:58:23 -- common/autotest_common.sh@10 -- # set +x 00:30:52.118 ************************************ 00:30:52.118 END TEST fio_dif_digest 00:30:52.118 ************************************ 00:30:52.118 11:58:23 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:52.118 11:58:23 -- target/dif.sh@147 -- # nvmftestfini 00:30:52.118 11:58:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:52.118 11:58:23 -- nvmf/common.sh@116 -- # sync 00:30:52.118 11:58:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:52.118 11:58:23 -- nvmf/common.sh@119 -- # set +e 00:30:52.118 11:58:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:52.118 11:58:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:52.118 rmmod nvme_tcp 00:30:52.118 rmmod nvme_fabrics 00:30:52.118 rmmod nvme_keyring 00:30:52.118 11:58:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:52.118 11:58:23 -- nvmf/common.sh@123 -- # set -e 00:30:52.118 11:58:23 -- nvmf/common.sh@124 -- # return 0 00:30:52.118 11:58:23 -- nvmf/common.sh@477 -- # '[' -n 91592 ']' 00:30:52.118 11:58:23 -- nvmf/common.sh@478 -- # killprocess 91592 00:30:52.118 11:58:23 -- common/autotest_common.sh@936 -- # '[' -z 91592 ']' 00:30:52.118 11:58:23 -- 
common/autotest_common.sh@940 -- # kill -0 91592 00:30:52.118 11:58:23 -- common/autotest_common.sh@941 -- # uname 00:30:52.118 11:58:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:52.118 11:58:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91592 00:30:52.118 11:58:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:52.118 11:58:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:52.118 11:58:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91592' 00:30:52.118 killing process with pid 91592 00:30:52.118 11:58:23 -- common/autotest_common.sh@955 -- # kill 91592 00:30:52.118 11:58:23 -- common/autotest_common.sh@960 -- # wait 91592 00:30:52.118 11:58:23 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:30:52.118 11:58:23 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:52.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:52.118 Waiting for block devices as requested 00:30:52.118 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:52.118 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:30:52.118 11:58:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:52.118 11:58:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:52.118 11:58:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.118 11:58:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:52.118 11:58:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.118 11:58:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:52.118 11:58:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.118 11:58:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:52.118 00:30:52.118 real 1m0.604s 00:30:52.118 user 3m54.326s 00:30:52.118 sys 0m11.321s 00:30:52.118 ************************************ 00:30:52.118 END TEST nvmf_dif 00:30:52.118 ************************************ 00:30:52.118 11:58:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:52.118 11:58:24 -- common/autotest_common.sh@10 -- # set +x 00:30:52.118 11:58:24 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:52.118 11:58:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:52.118 11:58:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:52.118 11:58:24 -- common/autotest_common.sh@10 -- # set +x 00:30:52.118 ************************************ 00:30:52.118 START TEST nvmf_abort_qd_sizes 00:30:52.118 ************************************ 00:30:52.118 11:58:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:52.118 * Looking for test storage... 
00:30:52.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:52.118 11:58:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:52.118 11:58:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:52.118 11:58:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:52.118 11:58:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:52.118 11:58:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:52.118 11:58:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:52.118 11:58:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:52.118 11:58:24 -- scripts/common.sh@335 -- # IFS=.-: 00:30:52.118 11:58:24 -- scripts/common.sh@335 -- # read -ra ver1 00:30:52.118 11:58:24 -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.118 11:58:24 -- scripts/common.sh@336 -- # read -ra ver2 00:30:52.118 11:58:24 -- scripts/common.sh@337 -- # local 'op=<' 00:30:52.118 11:58:24 -- scripts/common.sh@339 -- # ver1_l=2 00:30:52.118 11:58:24 -- scripts/common.sh@340 -- # ver2_l=1 00:30:52.118 11:58:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:52.118 11:58:24 -- scripts/common.sh@343 -- # case "$op" in 00:30:52.118 11:58:24 -- scripts/common.sh@344 -- # : 1 00:30:52.118 11:58:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:52.118 11:58:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:52.118 11:58:24 -- scripts/common.sh@364 -- # decimal 1 00:30:52.118 11:58:24 -- scripts/common.sh@352 -- # local d=1 00:30:52.118 11:58:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.118 11:58:24 -- scripts/common.sh@354 -- # echo 1 00:30:52.118 11:58:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:52.118 11:58:24 -- scripts/common.sh@365 -- # decimal 2 00:30:52.118 11:58:24 -- scripts/common.sh@352 -- # local d=2 00:30:52.118 11:58:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.118 11:58:24 -- scripts/common.sh@354 -- # echo 2 00:30:52.118 11:58:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:52.118 11:58:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:52.118 11:58:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:52.118 11:58:24 -- scripts/common.sh@367 -- # return 0 00:30:52.118 11:58:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.118 11:58:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.118 --rc genhtml_branch_coverage=1 00:30:52.118 --rc genhtml_function_coverage=1 00:30:52.118 --rc genhtml_legend=1 00:30:52.118 --rc geninfo_all_blocks=1 00:30:52.118 --rc geninfo_unexecuted_blocks=1 00:30:52.118 00:30:52.118 ' 00:30:52.118 11:58:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.118 --rc genhtml_branch_coverage=1 00:30:52.118 --rc genhtml_function_coverage=1 00:30:52.118 --rc genhtml_legend=1 00:30:52.118 --rc geninfo_all_blocks=1 00:30:52.118 --rc geninfo_unexecuted_blocks=1 00:30:52.118 00:30:52.118 ' 00:30:52.118 11:58:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.118 --rc genhtml_branch_coverage=1 00:30:52.118 --rc genhtml_function_coverage=1 00:30:52.118 --rc genhtml_legend=1 00:30:52.118 --rc geninfo_all_blocks=1 00:30:52.118 --rc geninfo_unexecuted_blocks=1 00:30:52.118 00:30:52.118 ' 00:30:52.118 
11:58:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.118 --rc genhtml_branch_coverage=1 00:30:52.118 --rc genhtml_function_coverage=1 00:30:52.118 --rc genhtml_legend=1 00:30:52.118 --rc geninfo_all_blocks=1 00:30:52.118 --rc geninfo_unexecuted_blocks=1 00:30:52.118 00:30:52.118 ' 00:30:52.118 11:58:24 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:52.118 11:58:24 -- nvmf/common.sh@7 -- # uname -s 00:30:52.118 11:58:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.118 11:58:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.118 11:58:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.118 11:58:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.118 11:58:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.118 11:58:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.118 11:58:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.118 11:58:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.118 11:58:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.118 11:58:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.118 11:58:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a 00:30:52.118 11:58:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0f74192-2f63-41a2-a029-58386886737a 00:30:52.118 11:58:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.118 11:58:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.118 11:58:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:52.118 11:58:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:52.118 11:58:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.118 11:58:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.118 11:58:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.118 11:58:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.118 11:58:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.119 11:58:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.119 11:58:24 -- paths/export.sh@5 -- # export PATH 00:30:52.119 11:58:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.119 11:58:24 -- nvmf/common.sh@46 -- # : 0 00:30:52.119 11:58:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:52.119 11:58:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:52.119 11:58:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:52.119 11:58:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.119 11:58:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.119 11:58:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:52.119 11:58:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:52.119 11:58:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:52.119 11:58:24 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:30:52.119 11:58:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:52.119 11:58:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.119 11:58:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:52.119 11:58:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:52.119 11:58:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:52.119 11:58:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.119 11:58:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:52.119 11:58:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.119 11:58:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:52.119 11:58:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:52.119 11:58:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:52.119 11:58:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:52.119 11:58:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:52.119 11:58:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:52.119 11:58:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.119 11:58:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.119 11:58:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:52.119 11:58:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:52.119 11:58:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:52.119 11:58:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:52.119 11:58:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:52.119 11:58:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.119 11:58:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:52.119 11:58:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:52.119 11:58:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:52.119 11:58:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:52.119 11:58:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:52.119 11:58:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:52.119 Cannot find device "nvmf_tgt_br" 00:30:52.119 11:58:24 -- nvmf/common.sh@154 -- # true 00:30:52.119 11:58:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:52.119 Cannot find device "nvmf_tgt_br2" 00:30:52.119 11:58:24 -- nvmf/common.sh@155 -- # true 
00:30:52.119 11:58:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:52.119 11:58:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:52.119 Cannot find device "nvmf_tgt_br" 00:30:52.119 11:58:24 -- nvmf/common.sh@157 -- # true 00:30:52.119 11:58:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:52.119 Cannot find device "nvmf_tgt_br2" 00:30:52.119 11:58:24 -- nvmf/common.sh@158 -- # true 00:30:52.119 11:58:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:52.119 11:58:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:52.119 11:58:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:52.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:52.119 11:58:24 -- nvmf/common.sh@161 -- # true 00:30:52.119 11:58:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:52.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:52.119 11:58:24 -- nvmf/common.sh@162 -- # true 00:30:52.119 11:58:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:52.119 11:58:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:52.119 11:58:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:52.119 11:58:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:52.119 11:58:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:52.119 11:58:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:52.119 11:58:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:52.119 11:58:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:52.119 11:58:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:52.119 11:58:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:52.119 11:58:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:52.119 11:58:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:52.119 11:58:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:52.119 11:58:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:52.119 11:58:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:52.119 11:58:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:52.119 11:58:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:52.119 11:58:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:52.119 11:58:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:52.119 11:58:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:52.119 11:58:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:52.119 11:58:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:52.119 11:58:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:52.380 11:58:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:52.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:52.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:30:52.380 00:30:52.380 --- 10.0.0.2 ping statistics --- 00:30:52.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.380 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:30:52.380 11:58:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:52.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:52.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:30:52.380 00:30:52.380 --- 10.0.0.3 ping statistics --- 00:30:52.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.380 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:30:52.380 11:58:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:52.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:52.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:30:52.380 00:30:52.380 --- 10.0.0.1 ping statistics --- 00:30:52.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.380 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:30:52.380 11:58:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.380 11:58:25 -- nvmf/common.sh@421 -- # return 0 00:30:52.380 11:58:25 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:52.380 11:58:25 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:52.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:53.209 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:53.209 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:30:53.209 11:58:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.209 11:58:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:53.209 11:58:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:53.209 11:58:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.209 11:58:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:53.209 11:58:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:53.469 11:58:26 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:30:53.469 11:58:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:53.469 11:58:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:53.469 11:58:26 -- common/autotest_common.sh@10 -- # set +x 00:30:53.469 11:58:26 -- nvmf/common.sh@469 -- # nvmfpid=92992 00:30:53.469 11:58:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:53.469 11:58:26 -- nvmf/common.sh@470 -- # waitforlisten 92992 00:30:53.469 11:58:26 -- common/autotest_common.sh@829 -- # '[' -z 92992 ']' 00:30:53.469 11:58:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.469 11:58:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:53.469 11:58:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.469 11:58:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:53.469 11:58:26 -- common/autotest_common.sh@10 -- # set +x 00:30:53.469 [2024-11-20 11:58:26.337503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
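At this point nvmf_veth_init has built the test network: one veth pair for the initiator side, one veth pair whose far end lives in the nvmf_tgt_ns_spdk namespace, both bridged together, with 10.0.0.1 on the host side and 10.0.0.2 inside the namespace, and the nvmf target is then started inside that namespace. A condensed sketch of the same sequence (commands taken from the trace above; the second target interface carrying 10.0.0.3 and the preliminary cleanup are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # reachability check
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf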
00:30:53.469 [2024-11-20 11:58:26.338027] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.469 [2024-11-20 11:58:26.476911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:53.728 [2024-11-20 11:58:26.554956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:53.728 [2024-11-20 11:58:26.555076] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.728 [2024-11-20 11:58:26.555083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.728 [2024-11-20 11:58:26.555088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.728 [2024-11-20 11:58:26.555331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.728 [2024-11-20 11:58:26.555587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.728 [2024-11-20 11:58:26.556401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.728 [2024-11-20 11:58:26.556404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.299 11:58:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:54.299 11:58:27 -- common/autotest_common.sh@862 -- # return 0 00:30:54.299 11:58:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:54.299 11:58:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:54.299 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.299 11:58:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:30:54.299 11:58:27 -- scripts/common.sh@311 -- # local bdf bdfs 00:30:54.299 11:58:27 -- scripts/common.sh@312 -- # local nvmes 00:30:54.299 11:58:27 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:30:54.299 11:58:27 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:54.299 11:58:27 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:30:54.299 11:58:27 -- scripts/common.sh@297 -- # local bdf= 00:30:54.299 11:58:27 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:30:54.299 11:58:27 -- scripts/common.sh@232 -- # local class 00:30:54.299 11:58:27 -- scripts/common.sh@233 -- # local subclass 00:30:54.299 11:58:27 -- scripts/common.sh@234 -- # local progif 00:30:54.299 11:58:27 -- scripts/common.sh@235 -- # printf %02x 1 00:30:54.299 11:58:27 -- scripts/common.sh@235 -- # class=01 00:30:54.299 11:58:27 -- scripts/common.sh@236 -- # printf %02x 8 00:30:54.299 11:58:27 -- scripts/common.sh@236 -- # subclass=08 00:30:54.299 11:58:27 -- scripts/common.sh@237 -- # printf %02x 2 00:30:54.299 11:58:27 -- scripts/common.sh@237 -- # progif=02 00:30:54.299 11:58:27 -- scripts/common.sh@239 -- # hash lspci 00:30:54.299 11:58:27 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:30:54.299 11:58:27 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:30:54.299 11:58:27 -- scripts/common.sh@242 -- # grep -i -- -p02 00:30:54.299 11:58:27 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:54.299 11:58:27 -- scripts/common.sh@244 -- # tr -d '"' 00:30:54.299 11:58:27 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:54.299 11:58:27 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:30:54.299 11:58:27 -- scripts/common.sh@15 -- # local i 00:30:54.299 11:58:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:30:54.299 11:58:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:54.299 11:58:27 -- scripts/common.sh@24 -- # return 0 00:30:54.299 11:58:27 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:30:54.299 11:58:27 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:54.299 11:58:27 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:30:54.299 11:58:27 -- scripts/common.sh@15 -- # local i 00:30:54.299 11:58:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:30:54.299 11:58:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:54.299 11:58:27 -- scripts/common.sh@24 -- # return 0 00:30:54.299 11:58:27 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:30:54.299 11:58:27 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:30:54.299 11:58:27 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:30:54.299 11:58:27 -- scripts/common.sh@322 -- # uname -s 00:30:54.299 11:58:27 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:30:54.299 11:58:27 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:30:54.299 11:58:27 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:30:54.299 11:58:27 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:30:54.299 11:58:27 -- scripts/common.sh@322 -- # uname -s 00:30:54.299 11:58:27 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:30:54.299 11:58:27 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:30:54.299 11:58:27 -- scripts/common.sh@327 -- # (( 2 )) 00:30:54.299 11:58:27 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:30:54.299 11:58:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:54.299 11:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:54.299 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.299 ************************************ 00:30:54.299 START TEST spdk_target_abort 00:30:54.299 ************************************ 00:30:54.299 11:58:27 -- common/autotest_common.sh@1114 -- # spdk_target 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:30:54.299 11:58:27 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:30:54.299 11:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.299 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 spdk_targetn1 00:30:54.560 11:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:54.560 11:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.560 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 [2024-11-20 
11:58:27.377427] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.560 11:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:30:54.560 11:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.560 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 11:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:30:54.560 11:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.560 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 11:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:30:54.560 11:58:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.560 11:58:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 [2024-11-20 11:58:27.417619] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.560 11:58:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:54.560 11:58:27 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:30:57.867 Initializing NVMe Controllers 00:30:57.868 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:30:57.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:30:57.868 Initialization complete. Launching workers. 00:30:57.868 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12110, failed: 0 00:30:57.868 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1297, failed to submit 10813 00:30:57.868 success 760, unsuccess 537, failed 0 00:30:57.868 11:58:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:57.868 11:58:30 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:01.164 [2024-11-20 11:58:33.877743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 [2024-11-20 11:58:33.877820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747480 is same with the state(5) to be set 00:31:01.164 Initializing NVMe Controllers 00:31:01.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:01.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:01.164 Initialization complete. Launching workers. 00:31:01.164 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5934, failed: 0 00:31:01.164 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1192, failed to submit 4742 00:31:01.164 success 267, unsuccess 925, failed 0 00:31:01.164 11:58:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:01.164 11:58:33 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:04.457 Initializing NVMe Controllers 00:31:04.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:04.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:04.457 Initialization complete. Launching workers. 
00:31:04.457 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31125, failed: 0 00:31:04.457 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2591, failed to submit 28534 00:31:04.457 success 513, unsuccess 2078, failed 0 00:31:04.457 11:58:37 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:31:04.457 11:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.457 11:58:37 -- common/autotest_common.sh@10 -- # set +x 00:31:04.457 11:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.457 11:58:37 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:04.457 11:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.457 11:58:37 -- common/autotest_common.sh@10 -- # set +x 00:31:05.027 11:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.027 11:58:37 -- target/abort_qd_sizes.sh@62 -- # killprocess 92992 00:31:05.027 11:58:37 -- common/autotest_common.sh@936 -- # '[' -z 92992 ']' 00:31:05.027 11:58:37 -- common/autotest_common.sh@940 -- # kill -0 92992 00:31:05.027 11:58:37 -- common/autotest_common.sh@941 -- # uname 00:31:05.027 11:58:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:05.027 11:58:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92992 00:31:05.027 killing process with pid 92992 00:31:05.027 11:58:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:05.028 11:58:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:05.028 11:58:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92992' 00:31:05.028 11:58:37 -- common/autotest_common.sh@955 -- # kill 92992 00:31:05.028 11:58:37 -- common/autotest_common.sh@960 -- # wait 92992 00:31:05.288 00:31:05.288 real 0m10.934s 00:31:05.288 user 0m44.700s 00:31:05.288 sys 0m1.447s 00:31:05.288 11:58:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:05.288 ************************************ 00:31:05.288 END TEST spdk_target_abort 00:31:05.288 ************************************ 00:31:05.288 11:58:38 -- common/autotest_common.sh@10 -- # set +x 00:31:05.288 11:58:38 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:31:05.288 11:58:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:05.288 11:58:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:05.288 11:58:38 -- common/autotest_common.sh@10 -- # set +x 00:31:05.288 ************************************ 00:31:05.288 START TEST kernel_target_abort 00:31:05.288 ************************************ 00:31:05.288 11:58:38 -- common/autotest_common.sh@1114 -- # kernel_target 00:31:05.288 11:58:38 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:31:05.288 11:58:38 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:31:05.288 11:58:38 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:31:05.288 11:58:38 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:31:05.288 11:58:38 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:31:05.288 11:58:38 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:31:05.288 11:58:38 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:05.288 11:58:38 -- nvmf/common.sh@627 -- # local block nvme 00:31:05.288 11:58:38 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:31:05.288 11:58:38 -- nvmf/common.sh@630 -- # modprobe nvmet 00:31:05.547 11:58:38 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:05.547 11:58:38 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:05.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:05.805 Waiting for block devices as requested 00:31:06.063 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:06.063 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:31:06.063 11:58:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:31:06.063 11:58:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:06.063 11:58:39 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:31:06.063 11:58:39 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:31:06.063 11:58:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:06.063 No valid GPT data, bailing 00:31:06.063 11:58:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:06.063 11:58:39 -- scripts/common.sh@393 -- # pt= 00:31:06.063 11:58:39 -- scripts/common.sh@394 -- # return 1 00:31:06.063 11:58:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:31:06.063 11:58:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:31:06.063 11:58:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:06.330 11:58:39 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:31:06.330 11:58:39 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:31:06.330 11:58:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:06.330 No valid GPT data, bailing 00:31:06.330 11:58:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:06.330 11:58:39 -- scripts/common.sh@393 -- # pt= 00:31:06.330 11:58:39 -- scripts/common.sh@394 -- # return 1 00:31:06.330 11:58:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:31:06.330 11:58:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:31:06.330 11:58:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:31:06.330 11:58:39 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:31:06.330 11:58:39 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:31:06.330 11:58:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:31:06.330 No valid GPT data, bailing 00:31:06.330 11:58:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:31:06.330 11:58:39 -- scripts/common.sh@393 -- # pt= 00:31:06.330 11:58:39 -- scripts/common.sh@394 -- # return 1 00:31:06.330 11:58:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:31:06.330 11:58:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:31:06.330 11:58:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:31:06.330 11:58:39 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:31:06.330 11:58:39 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:31:06.330 11:58:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:31:06.330 No valid GPT data, bailing 00:31:06.330 11:58:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:31:06.330 11:58:39 -- scripts/common.sh@393 -- # pt= 00:31:06.330 11:58:39 -- scripts/common.sh@394 -- # return 1 00:31:06.330 11:58:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:31:06.330 11:58:39 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:31:06.330 11:58:39 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:31:06.330 11:58:39 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:31:06.330 11:58:39 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:06.330 11:58:39 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:31:06.330 11:58:39 -- nvmf/common.sh@654 -- # echo 1 00:31:06.330 11:58:39 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:31:06.330 11:58:39 -- nvmf/common.sh@656 -- # echo 1 00:31:06.330 11:58:39 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:31:06.330 11:58:39 -- nvmf/common.sh@663 -- # echo tcp 00:31:06.330 11:58:39 -- nvmf/common.sh@664 -- # echo 4420 00:31:06.330 11:58:39 -- nvmf/common.sh@665 -- # echo ipv4 00:31:06.330 11:58:39 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:06.330 11:58:39 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0f74192-2f63-41a2-a029-58386886737a --hostid=f0f74192-2f63-41a2-a029-58386886737a -a 10.0.0.1 -t tcp -s 4420 00:31:06.330 00:31:06.330 Discovery Log Number of Records 2, Generation counter 2 00:31:06.330 =====Discovery Log Entry 0====== 00:31:06.330 trtype: tcp 00:31:06.330 adrfam: ipv4 00:31:06.330 subtype: current discovery subsystem 00:31:06.330 treq: not specified, sq flow control disable supported 00:31:06.330 portid: 1 00:31:06.330 trsvcid: 4420 00:31:06.330 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:06.330 traddr: 10.0.0.1 00:31:06.330 eflags: none 00:31:06.330 sectype: none 00:31:06.330 =====Discovery Log Entry 1====== 00:31:06.330 trtype: tcp 00:31:06.330 adrfam: ipv4 00:31:06.330 subtype: nvme subsystem 00:31:06.330 treq: not specified, sq flow control disable supported 00:31:06.330 portid: 1 00:31:06.330 trsvcid: 4420 00:31:06.330 subnqn: kernel_target 00:31:06.330 traddr: 10.0.0.1 00:31:06.330 eflags: none 00:31:06.330 sectype: none 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:06.330 11:58:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:09.623 Initializing NVMe Controllers 00:31:09.623 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:31:09.623 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:31:09.623 Initialization complete. Launching workers. 00:31:09.623 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 37983, failed: 0 00:31:09.623 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 37983, failed to submit 0 00:31:09.623 success 0, unsuccess 37983, failed 0 00:31:09.623 11:58:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:09.623 11:58:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:12.916 Initializing NVMe Controllers 00:31:12.916 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:31:12.916 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:31:12.916 Initialization complete. Launching workers. 00:31:12.916 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77022, failed: 0 00:31:12.916 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 37302, failed to submit 39720 00:31:12.916 success 0, unsuccess 37302, failed 0 00:31:12.916 11:58:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:12.916 11:58:45 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:16.214 Initializing NVMe Controllers 00:31:16.214 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:31:16.214 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:31:16.214 Initialization complete. Launching workers. 
00:31:16.214 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 101456, failed: 0 00:31:16.214 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 25366, failed to submit 76090 00:31:16.214 success 0, unsuccess 25366, failed 0 00:31:16.214 11:58:48 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:31:16.214 11:58:48 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:31:16.214 11:58:48 -- nvmf/common.sh@677 -- # echo 0 00:31:16.214 11:58:48 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:31:16.214 11:58:48 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:31:16.214 11:58:48 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:16.214 11:58:48 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:31:16.214 11:58:48 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:31:16.214 11:58:48 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:31:16.214 00:31:16.214 real 0m10.669s 00:31:16.214 user 0m6.248s 00:31:16.214 sys 0m2.111s 00:31:16.214 11:58:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:16.214 11:58:48 -- common/autotest_common.sh@10 -- # set +x 00:31:16.214 ************************************ 00:31:16.214 END TEST kernel_target_abort 00:31:16.214 ************************************ 00:31:16.214 11:58:49 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:31:16.214 11:58:49 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:31:16.214 11:58:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:16.214 11:58:49 -- nvmf/common.sh@116 -- # sync 00:31:16.214 11:58:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:16.214 11:58:49 -- nvmf/common.sh@119 -- # set +e 00:31:16.214 11:58:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:16.214 11:58:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:16.214 rmmod nvme_tcp 00:31:16.214 rmmod nvme_fabrics 00:31:16.214 rmmod nvme_keyring 00:31:16.214 11:58:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:16.214 11:58:49 -- nvmf/common.sh@123 -- # set -e 00:31:16.214 11:58:49 -- nvmf/common.sh@124 -- # return 0 00:31:16.214 11:58:49 -- nvmf/common.sh@477 -- # '[' -n 92992 ']' 00:31:16.214 11:58:49 -- nvmf/common.sh@478 -- # killprocess 92992 00:31:16.214 11:58:49 -- common/autotest_common.sh@936 -- # '[' -z 92992 ']' 00:31:16.214 11:58:49 -- common/autotest_common.sh@940 -- # kill -0 92992 00:31:16.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (92992) - No such process 00:31:16.214 Process with pid 92992 is not found 00:31:16.214 11:58:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 92992 is not found' 00:31:16.214 11:58:49 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:16.214 11:58:49 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:17.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:17.154 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:31:17.154 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:31:17.154 11:58:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:17.154 11:58:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:17.154 11:58:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.154 11:58:50 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:31:17.154 11:58:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.154 11:58:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:17.154 11:58:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.154 11:58:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:31:17.154 00:31:17.154 real 0m25.529s 00:31:17.154 user 0m52.354s 00:31:17.154 sys 0m5.342s 00:31:17.154 11:58:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:17.154 11:58:50 -- common/autotest_common.sh@10 -- # set +x 00:31:17.154 ************************************ 00:31:17.154 END TEST nvmf_abort_qd_sizes 00:31:17.154 ************************************ 00:31:17.154 11:58:50 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:17.154 11:58:50 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:31:17.154 11:58:50 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:31:17.154 11:58:50 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:31:17.154 11:58:50 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:31:17.154 11:58:50 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:31:17.154 11:58:50 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:31:17.154 11:58:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:17.154 11:58:50 -- common/autotest_common.sh@10 -- # set +x 00:31:17.154 11:58:50 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:31:17.154 11:58:50 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:31:17.154 11:58:50 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:31:17.154 11:58:50 -- common/autotest_common.sh@10 -- # set +x 00:31:19.719 INFO: APP EXITING 00:31:19.719 INFO: killing all VMs 00:31:19.719 INFO: killing vhost app 00:31:19.719 INFO: EXIT DONE 00:31:20.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:20.290 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:31:20.290 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:31:21.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:21.228 Cleaning 00:31:21.228 Removing: /var/run/dpdk/spdk0/config 00:31:21.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:21.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:21.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:21.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:21.228 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:21.228 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:21.228 Removing: /var/run/dpdk/spdk1/config 00:31:21.228 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:21.228 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:21.228 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:31:21.228 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:21.228 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:21.228 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:21.228 Removing: /var/run/dpdk/spdk2/config 00:31:21.228 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:21.228 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:21.228 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:21.228 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:21.228 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:21.228 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:21.228 Removing: /var/run/dpdk/spdk3/config 00:31:21.228 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:21.228 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:21.228 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:21.228 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:21.228 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:21.228 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:21.228 Removing: /var/run/dpdk/spdk4/config 00:31:21.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:21.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:21.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:21.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:21.228 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:21.228 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:21.228 Removing: /dev/shm/nvmf_trace.0 00:31:21.487 Removing: /dev/shm/spdk_tgt_trace.pid55787 00:31:21.487 Removing: /var/run/dpdk/spdk0 00:31:21.487 Removing: /var/run/dpdk/spdk1 00:31:21.487 Removing: /var/run/dpdk/spdk2 00:31:21.487 Removing: /var/run/dpdk/spdk3 00:31:21.487 Removing: /var/run/dpdk/spdk4 00:31:21.487 Removing: /var/run/dpdk/spdk_pid55635 00:31:21.487 Removing: /var/run/dpdk/spdk_pid55787 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56103 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56373 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56555 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56640 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56739 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56836 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56875 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56910 00:31:21.487 Removing: /var/run/dpdk/spdk_pid56979 00:31:21.487 Removing: /var/run/dpdk/spdk_pid57113 00:31:21.487 Removing: /var/run/dpdk/spdk_pid57741 00:31:21.487 Removing: /var/run/dpdk/spdk_pid57799 00:31:21.488 Removing: /var/run/dpdk/spdk_pid57863 00:31:21.488 Removing: /var/run/dpdk/spdk_pid57891 00:31:21.488 Removing: /var/run/dpdk/spdk_pid57971 00:31:21.488 Removing: /var/run/dpdk/spdk_pid57999 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58074 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58102 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58153 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58183 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58229 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58260 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58414 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58449 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58531 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58606 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58625 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58689 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58707 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58743 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58757 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58797 
00:31:21.488 Removing: /var/run/dpdk/spdk_pid58811 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58851 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58865 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58900 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58919 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58954 00:31:21.488 Removing: /var/run/dpdk/spdk_pid58973 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59007 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59027 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59056 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59081 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59110 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59130 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59164 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59184 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59218 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59240 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59269 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59294 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59323 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59349 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59378 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59392 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59432 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59448 00:31:21.488 Removing: /var/run/dpdk/spdk_pid59488 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59502 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59542 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59559 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59601 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59619 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59657 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59676 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59714 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59733 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59769 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59846 00:31:21.747 Removing: /var/run/dpdk/spdk_pid59965 00:31:21.747 Removing: /var/run/dpdk/spdk_pid60394 00:31:21.747 Removing: /var/run/dpdk/spdk_pid67340 00:31:21.747 Removing: /var/run/dpdk/spdk_pid67691 00:31:21.747 Removing: /var/run/dpdk/spdk_pid70169 00:31:21.747 Removing: /var/run/dpdk/spdk_pid70556 00:31:21.747 Removing: /var/run/dpdk/spdk_pid70801 00:31:21.747 Removing: /var/run/dpdk/spdk_pid70848 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71116 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71122 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71177 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71235 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71295 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71339 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71341 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71361 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71398 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71406 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71464 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71522 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71582 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71628 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71630 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71656 00:31:21.747 Removing: /var/run/dpdk/spdk_pid71948 00:31:21.747 Removing: /var/run/dpdk/spdk_pid72106 00:31:21.747 Removing: /var/run/dpdk/spdk_pid72368 00:31:21.747 Removing: /var/run/dpdk/spdk_pid72418 00:31:21.747 Removing: /var/run/dpdk/spdk_pid72804 00:31:21.747 Removing: /var/run/dpdk/spdk_pid73339 00:31:21.747 Removing: /var/run/dpdk/spdk_pid73764 00:31:21.747 Removing: /var/run/dpdk/spdk_pid74728 00:31:21.747 Removing: 
/var/run/dpdk/spdk_pid75714 00:31:21.747 Removing: /var/run/dpdk/spdk_pid75837 00:31:21.747 Removing: /var/run/dpdk/spdk_pid75899 00:31:21.747 Removing: /var/run/dpdk/spdk_pid77373 00:31:21.747 Removing: /var/run/dpdk/spdk_pid77620 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78062 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78172 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78318 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78364 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78410 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78456 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78610 00:31:21.747 Removing: /var/run/dpdk/spdk_pid78767 00:31:21.747 Removing: /var/run/dpdk/spdk_pid79022 00:31:21.747 Removing: /var/run/dpdk/spdk_pid79139 00:31:21.747 Removing: /var/run/dpdk/spdk_pid79560 00:31:21.747 Removing: /var/run/dpdk/spdk_pid79952 00:31:21.747 Removing: /var/run/dpdk/spdk_pid79954 00:31:21.747 Removing: /var/run/dpdk/spdk_pid82227 00:31:21.747 Removing: /var/run/dpdk/spdk_pid82539 00:31:21.747 Removing: /var/run/dpdk/spdk_pid83058 00:31:21.747 Removing: /var/run/dpdk/spdk_pid83061 00:31:21.747 Removing: /var/run/dpdk/spdk_pid83398 00:31:21.747 Removing: /var/run/dpdk/spdk_pid83412 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83437 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83462 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83467 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83610 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83612 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83720 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83722 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83830 00:31:22.007 Removing: /var/run/dpdk/spdk_pid83838 00:31:22.007 Removing: /var/run/dpdk/spdk_pid84326 00:31:22.007 Removing: /var/run/dpdk/spdk_pid84375 00:31:22.007 Removing: /var/run/dpdk/spdk_pid84526 00:31:22.007 Removing: /var/run/dpdk/spdk_pid84643 00:31:22.007 Removing: /var/run/dpdk/spdk_pid85055 00:31:22.007 Removing: /var/run/dpdk/spdk_pid85308 00:31:22.007 Removing: /var/run/dpdk/spdk_pid85801 00:31:22.007 Removing: /var/run/dpdk/spdk_pid86360 00:31:22.007 Removing: /var/run/dpdk/spdk_pid86829 00:31:22.007 Removing: /var/run/dpdk/spdk_pid86914 00:31:22.007 Removing: /var/run/dpdk/spdk_pid87004 00:31:22.007 Removing: /var/run/dpdk/spdk_pid87089 00:31:22.007 Removing: /var/run/dpdk/spdk_pid87246 00:31:22.007 Removing: /var/run/dpdk/spdk_pid87331 00:31:22.007 Removing: /var/run/dpdk/spdk_pid87420 00:31:22.007 Removing: /var/run/dpdk/spdk_pid87507 00:31:22.007 Removing: /var/run/dpdk/spdk_pid87856 00:31:22.007 Removing: /var/run/dpdk/spdk_pid88559 00:31:22.007 Removing: /var/run/dpdk/spdk_pid89907 00:31:22.007 Removing: /var/run/dpdk/spdk_pid90108 00:31:22.007 Removing: /var/run/dpdk/spdk_pid90397 00:31:22.007 Removing: /var/run/dpdk/spdk_pid90708 00:31:22.007 Removing: /var/run/dpdk/spdk_pid91280 00:31:22.007 Removing: /var/run/dpdk/spdk_pid91290 00:31:22.007 Removing: /var/run/dpdk/spdk_pid91663 00:31:22.007 Removing: /var/run/dpdk/spdk_pid91822 00:31:22.007 Removing: /var/run/dpdk/spdk_pid91989 00:31:22.007 Removing: /var/run/dpdk/spdk_pid92090 00:31:22.007 Removing: /var/run/dpdk/spdk_pid92261 00:31:22.007 Removing: /var/run/dpdk/spdk_pid92376 00:31:22.007 Removing: /var/run/dpdk/spdk_pid93061 00:31:22.007 Removing: /var/run/dpdk/spdk_pid93091 00:31:22.007 Removing: /var/run/dpdk/spdk_pid93132 00:31:22.007 Removing: /var/run/dpdk/spdk_pid93388 00:31:22.007 Removing: /var/run/dpdk/spdk_pid93418 00:31:22.007 Removing: /var/run/dpdk/spdk_pid93454 00:31:22.007 Clean 00:31:22.267 killing process with pid 
49946 00:31:22.267 killing process with pid 49947 00:31:22.267 11:58:55 -- common/autotest_common.sh@1446 -- # return 0 00:31:22.267 11:58:55 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:31:22.267 11:58:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:22.267 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.267 11:58:55 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:31:22.267 11:58:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:22.267 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.267 11:58:55 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:22.267 11:58:55 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:22.267 11:58:55 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:22.267 11:58:55 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:31:22.267 11:58:55 -- spdk/autotest.sh@383 -- # hostname 00:31:22.267 11:58:55 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:22.528 geninfo: WARNING: invalid characters removed from testname! 00:31:44.473 11:59:16 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:47.009 11:59:19 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:48.918 11:59:21 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:50.839 11:59:23 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:52.744 11:59:25 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:54.651 11:59:27 -- spdk/autotest.sh@392 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:56.560 11:59:29 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:56.560 11:59:29 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:31:56.560 11:59:29 -- common/autotest_common.sh@1690 -- $ lcov --version 00:31:56.560 11:59:29 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:31:56.560 11:59:29 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:31:56.560 11:59:29 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:31:56.560 11:59:29 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:31:56.560 11:59:29 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:31:56.560 11:59:29 -- scripts/common.sh@335 -- $ IFS=.-: 00:31:56.560 11:59:29 -- scripts/common.sh@335 -- $ read -ra ver1 00:31:56.560 11:59:29 -- scripts/common.sh@336 -- $ IFS=.-: 00:31:56.560 11:59:29 -- scripts/common.sh@336 -- $ read -ra ver2 00:31:56.560 11:59:29 -- scripts/common.sh@337 -- $ local 'op=<' 00:31:56.560 11:59:29 -- scripts/common.sh@339 -- $ ver1_l=2 00:31:56.560 11:59:29 -- scripts/common.sh@340 -- $ ver2_l=1 00:31:56.560 11:59:29 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:31:56.560 11:59:29 -- scripts/common.sh@343 -- $ case "$op" in 00:31:56.560 11:59:29 -- scripts/common.sh@344 -- $ : 1 00:31:56.560 11:59:29 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:31:56.560 11:59:29 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:56.560 11:59:29 -- scripts/common.sh@364 -- $ decimal 1 00:31:56.560 11:59:29 -- scripts/common.sh@352 -- $ local d=1 00:31:56.560 11:59:29 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:31:56.560 11:59:29 -- scripts/common.sh@354 -- $ echo 1 00:31:56.560 11:59:29 -- scripts/common.sh@364 -- $ ver1[v]=1 00:31:56.560 11:59:29 -- scripts/common.sh@365 -- $ decimal 2 00:31:56.560 11:59:29 -- scripts/common.sh@352 -- $ local d=2 00:31:56.560 11:59:29 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:31:56.560 11:59:29 -- scripts/common.sh@354 -- $ echo 2 00:31:56.560 11:59:29 -- scripts/common.sh@365 -- $ ver2[v]=2 00:31:56.560 11:59:29 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:31:56.560 11:59:29 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:31:56.560 11:59:29 -- scripts/common.sh@367 -- $ return 0 00:31:56.560 11:59:29 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.560 11:59:29 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:31:56.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.560 --rc genhtml_branch_coverage=1 00:31:56.560 --rc genhtml_function_coverage=1 00:31:56.560 --rc genhtml_legend=1 00:31:56.560 --rc geninfo_all_blocks=1 00:31:56.560 --rc geninfo_unexecuted_blocks=1 00:31:56.560 00:31:56.560 ' 00:31:56.560 11:59:29 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:31:56.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.560 --rc genhtml_branch_coverage=1 00:31:56.560 --rc genhtml_function_coverage=1 00:31:56.560 --rc genhtml_legend=1 00:31:56.560 --rc geninfo_all_blocks=1 00:31:56.560 --rc geninfo_unexecuted_blocks=1 00:31:56.560 00:31:56.560 ' 00:31:56.560 11:59:29 -- 
common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:31:56.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.560 --rc genhtml_branch_coverage=1 00:31:56.560 --rc genhtml_function_coverage=1 00:31:56.560 --rc genhtml_legend=1 00:31:56.560 --rc geninfo_all_blocks=1 00:31:56.560 --rc geninfo_unexecuted_blocks=1 00:31:56.560 00:31:56.560 ' 00:31:56.560 11:59:29 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:31:56.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.560 --rc genhtml_branch_coverage=1 00:31:56.560 --rc genhtml_function_coverage=1 00:31:56.560 --rc genhtml_legend=1 00:31:56.560 --rc geninfo_all_blocks=1 00:31:56.560 --rc geninfo_unexecuted_blocks=1 00:31:56.560 00:31:56.560 ' 00:31:56.560 11:59:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:56.560 11:59:29 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:56.560 11:59:29 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.560 11:59:29 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.560 11:59:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.560 11:59:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.560 11:59:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.560 11:59:29 -- paths/export.sh@5 -- $ export PATH 00:31:56.560 11:59:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.560 11:59:29 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:56.820 11:59:29 -- common/autobuild_common.sh@440 -- $ date +%s 00:31:56.820 11:59:29 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732103969.XXXXXX 00:31:56.820 11:59:29 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732103969.FE330v 00:31:56.820 11:59:29 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:31:56.820 11:59:29 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:31:56.820 11:59:29 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:56.820 11:59:29 -- 
common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:56.820 11:59:29 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:56.820 11:59:29 -- common/autobuild_common.sh@456 -- $ get_config_params 00:31:56.820 11:59:29 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:56.820 11:59:29 -- common/autotest_common.sh@10 -- $ set +x 00:31:56.820 11:59:29 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:31:56.820 11:59:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:56.820 11:59:29 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:56.820 11:59:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:56.820 11:59:29 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:56.820 11:59:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:56.820 11:59:29 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:56.820 11:59:29 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:56.820 11:59:29 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:56.820 11:59:29 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:56.820 11:59:29 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:56.820 + [[ -n 5390 ]] 00:31:56.820 + sudo kill 5390 00:31:56.831 [Pipeline] } 00:31:56.850 [Pipeline] // timeout 00:31:56.856 [Pipeline] } 00:31:56.874 [Pipeline] // stage 00:31:56.880 [Pipeline] } 00:31:56.897 [Pipeline] // catchError 00:31:56.907 [Pipeline] stage 00:31:56.911 [Pipeline] { (Stop VM) 00:31:56.927 [Pipeline] sh 00:31:57.215 + vagrant halt 00:31:59.764 ==> default: Halting domain... 00:32:07.920 [Pipeline] sh 00:32:08.203 + vagrant destroy -f 00:32:10.744 ==> default: Removing domain... 00:32:10.758 [Pipeline] sh 00:32:11.100 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:32:11.110 [Pipeline] } 00:32:11.128 [Pipeline] // stage 00:32:11.135 [Pipeline] } 00:32:11.151 [Pipeline] // dir 00:32:11.158 [Pipeline] } 00:32:11.175 [Pipeline] // wrap 00:32:11.184 [Pipeline] } 00:32:11.195 [Pipeline] // catchError 00:32:11.204 [Pipeline] stage 00:32:11.205 [Pipeline] { (Epilogue) 00:32:11.216 [Pipeline] sh 00:32:11.497 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:16.790 [Pipeline] catchError 00:32:16.792 [Pipeline] { 00:32:16.806 [Pipeline] sh 00:32:17.092 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:17.092 Artifacts sizes are good 00:32:17.102 [Pipeline] } 00:32:17.116 [Pipeline] // catchError 00:32:17.127 [Pipeline] archiveArtifacts 00:32:17.134 Archiving artifacts 00:32:17.289 [Pipeline] cleanWs 00:32:17.301 [WS-CLEANUP] Deleting project workspace... 00:32:17.301 [WS-CLEANUP] Deferred wipeout is used... 00:32:17.308 [WS-CLEANUP] done 00:32:17.310 [Pipeline] } 00:32:17.326 [Pipeline] // stage 00:32:17.331 [Pipeline] } 00:32:17.347 [Pipeline] // node 00:32:17.353 [Pipeline] End of Pipeline 00:32:17.392 Finished: SUCCESS